Is it no longer possible to copy permalinks to Slack messages? Or am I just not finding it in the redesigned menu?
I suspect this is a byproduct of a bad continuous integration staged rollout involving feature flags... 🤔
If you choose "Share Message" (which, nb, I do not see on top-level posts in my desktop client, only replies), is copying the link one of the share options - as a button in the bottom left of the panel that opens?
Unfortunately not. It's what I used in my post to #random earlier to link to a thread in #general, but it just includes the thread preview as an attachment without letting me put the reference in the message itself.
FWIW, that was in Safari. I opened in Chrome and none of your profile images are loading, but I get both "Share message…" and "Copy link"…
Yeah, so it's almost certainly a bad continuous integration staged rollout involving feature flags RabbitMQ prolog facebook SeaTac Aaron Patterson cat dogecoin grandparent escape hatch first album by Tool like thumbs subscribe fav deploy on a friday.
A: current codebase B: current codebase except some of the features don't work
I could have sworn I got to this video from someone else linking to it here, but it shows that in one form Excel is that environment. Excel is a purely functional programming language that is very visual with immediate feedback.
Yes, there are many layers of verification and validation in the process of accreting scientific knowledge, so there is a lot of redundancy that makes for robust results. But you have to be a domain expert to judge how robust any given quoted result actually is. In highly visible domains, which epidemiology has suddenly become, there are always people who oversell unreliable small-scale findings to the public, and others who criticize robust results with superficial arguments. Science works best when politics doesn't care about its outcomes.
Here is a very hard-hitting blog article on the code quality of the Ferguson model. "First of all, the elephant in the room: code quality. It is very difficult to look at the Ferguson code with any understanding of software engineering and conclude that this is good, or even tolerable... for thirteen years, taxpayer funding from the MRC went to Ferguson and his team, and all it produced was code that violated one of the most fundamental precepts of good software development - intelligibility."
https://chrisvoncsefalvay.com/2020/05/09/imperial-covid-model/
Ease of reading is a major objective of my Beads project, and I feel that it has been achieved. I am contemplating adding a feature where you can insist on a certain percentage of commenting in critical areas like function formal parameters, which can be hard to read when stripped down to single letters.
As so often, that's a comment from a software professional who seems quite ignorant about what he criticizes: research. The statement that "for thirteen years, taxpayer funding from the MRC went to Ferguson and his team, and all it produced was code..." is very wrong. Computational scientists all around the world still struggle to get funding for software development. Public research funding is allocated for producing research papers, not software. And people who invest too much time in software development don't get jobs in academia. That's a big part of the problem. Decision makers in public research have only very recently begun to understand that code is both an important tool for doing science and an important output of research. I hope that the story of this model will help to accelerate this process.
His bio said he was a virologist and did computational work, so I think the useful content of that post is "someone doing very similar work has concerns about Ferguson's model", which is more interesting than "software professionals in an unrelated area do".
Building a company on Django in 2020 seems like the equivalent of driving a PT Cruiser and blasting Faith Hill's "Breathe" on a CD while your friends are listening to The Weeknd in their Teslas. Swimming against this current isn't easy, and not in a trendy contrarian way.
…
If Wikipedia were started today, it'd be React. Maybe?
What if everyone's wrong? We've been wrong before.
Pairs well with https://sourcehut.org/blog/2020-04-20-prioritizing-simplitity/ - sourcehut has barely any javascript, but it still often manages to be snappier than github.
Microsoft bought GitHub for 7 billion dollars. GitLab is valued at 3 billion dollars. Together, the two companies have over 2,000 employees. SourceHut made 4 thousand dollars in Q1 with two employees and an intern. How do we deliver on this level of reliability and performance compared to these giants? The answer is a fundamental difference in our approach and engineering ethos.
yeah, I love React but I've also been feeling like it might be an evolutionary dead end. We spend a lot of time fighting with React to get better page performance. I thought the support for SSR would be a lot easier and better by now, but it's still an advanced technique, hard to get right
The trick to competing with a behemoth is to find some difficult, seemingly essential aspect of what they're doing, then don't do it. For instance with website responsiveness, design so that you get 80% of the dynamicity you're hoping for but with... let's check a project here... 113 lines of JavaScript.
I'm not sure if there's a better channel for this... I'm working on WhiteBox, a live code previsualizer/debugger for compiled languages. I'll be streaming some development in 10 hours: https://handmade.network/whenisit?t=1589223600&n=Andrew%27s%20WhiteBox%20dev%20stream&u=twitch.tv%2Fazmreece… Watch on https://twitch.tv/azmreece I have live voice chat on while I stream, which you can join here: https://discord.gg/xHgepxM
This looks great! Unfortunately I'm in the middle of my workday.
I did a bunch of streaming for a couple of people over the weekend, so this whole type of activity is making sense to me now in a way it never did before.
Thanks for the encouragement! Maybe next time - I'd very much like your input 🙂
Maybe the terminal will evolve into a useful UI before UIs do?
https://www.willmcgugan.com/blog/tech/post/real-working-hyperlinks-in-the-terminal-with-rich/ and https://www.nushell.sh/ show more interesting potential than anything new going on in UIs I've seen lately
Interesting! BTW, has anyone here played with Nushell already? There are some good ideas there but I wonder if it is ready for day-to-day work.
I tried it. The last release got some support for aliases, which was the last thing I needed to switch. I tried it for a day, but the fact that I can't "splice" arguments in aliases made me go back - though that's just because I have some aliases that I use a lot.
Was curious on reading more about the terminal hyperlinks - this looks helpful! https://gist.github.com/egmontkob/eb114294efbcd5adb1944c9f3cb5feda
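For anyone curious what those hyperlinks actually look like on the wire: it's the OSC 8 escape sequence described in that gist. A minimal sketch in JavaScript - the `hyperlink` helper name is my own invention, not from any library:

```javascript
// A sketch of the OSC 8 escape sequence for clickable terminal links.
// Supporting terminals (iTerm2, Windows Terminal, recent GNOME Terminal)
// render the wrapped text as a link.
const OSC8 = '\x1b]8;;'; // "operating system command" 8
const ST = '\x1b\\';     // string terminator

function hyperlink(url, text) {
  // open link .. visible text .. close link
  return `${OSC8}${url}${ST}${text}${OSC8}${ST}`;
}

console.log(hyperlink('https://example.com', 'click me'));
```

Terminals without OSC 8 support just ignore the escapes and print the plain text, so it degrades gracefully.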
Oh no, there was never an announcement in #end-user-programming. It did get announced on #general: https://futureofcoding.slack.com/archives/C5T9GPWFL/p1588578156478000 It also got broadcast on Twitter a bunch, and I mentioned it on my usual haunt, Lobste.rs (though I see I missed Mastodon).
https://en.wikipedia.org/wiki/Analysis_paralysis
I was wondering if people here have had to deal with this problem (and what strategies they use to cope) - it seems to me it may be particularly troublesome when trying to innovate.
I was also wondering if there is more slang around this, other than "Analysis Paralysis", just out of curiosity. I came up with "Research winter" but I'm sure there must be other terms out there 🙂
Say more? I'm curious at what level you're thinking about this. Is this at the level of individual thought, group action, or something broader? Also, what do you see as the connection between Analysis Paralysis and Research Winter? I wouldn't naturally connect those ideas.
my definition of "research winter" is just a period of time when you have an idea but you don't know how to implement it, whether someone has done something similar already, etc. In my case I tend to go through a BFS of "prior art", afraid I may be wasting my time on something that has been done before (worse: proven wrong or useless before), etc
while you are looking for prior art you don't actually produce anything, out of caution, perhaps
or the thought that there might be out there an optimal way to achieve something.
Oh! There's definitely an element of Yak Shaving involved, so that's related
A related phenomenon I've experienced is doubting your tooling. I don't have a good name for it, but I find it hard to settle on a toolset for a project. I have found it helpful to start by building one to throw away with a minimal set of tooling I'm comfortable with (and then I don't throw it away).
right - I guess when you have no restrictions, it's hard to settle on a particular language, for instance
One example: you are designing a new language on GitHub, and a few people have graciously submitted bug reports or contributed changes. Now you have users! There are some new language features you would like to add, but what about maintaining backward compatibility? If you screw up an experimental language change, then you'll need to later go back and make a non-backward-compatible change. Which would be terrible! Better get the design perfect the first time.
The struggle is real. Best is to have a good absolutely-not-going-to-do list.
Emmanuel Oga I rejected... three bug reports today using the not-to-do list.
oh, been reading these patterns on using named graphs: https://patterns.dataincubator.org/book/data-management-patterns.html
🎙 Episode 47 - Max/MSP & Pure Data with Miller Puckette 🎧
I heard an interview with Miller on another podcast (linked in the show notes), which focussed on the history of Max and Pd and the arc of his career. He dropped a few kernels of design-related ideas that went unexplored, so I decided to bring Miller on our show and have him go in-depth. We talked about the design of Max's scheduler (vs other kinds of realtime scheduling available in the early 80s), how he arrived at the visual "patcher" interface, why Pd looks and feels so spartan compared to Max, to what extent the patcher interface is actually visual as opposed to just a fancier CLI, and other notions that'd be interesting to folks designing their own live/visual programming environments.
This episode is the shortest, tightest interview I've done yet. I'm also slowly dialling-in the sonic identity that I'd like the show to have. Not there yet, but getting closer.
Enjoy! And please help spread the word about the show if you have a good way to do so.
Show notes & detailed transcript: https://futureofcoding.org/episodes/047
This surpassed the previous 1-day download record by 62%! Glad it's getting spread around a bit.
Enjoyed it; I had heard the original poor-quality podcast a few weeks ago; this was much better 🙂
I really enjoyed this episode, thanks! In fact it is after playing with Pd/Max/Max4Live for some time that I wanted to explore programming by non experts. It was even before I discovered Scratch.
There are very inspiring discussions in this episode and, Ivan Reese, I like the way you ask Puckette some questions which make him consider things from other points of view.
At some point Puckette says that PureData is terrible for some programming tasks (the example he gives is finding prime numbers) and that "Pd really needs a text language of some kind".
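For reference, the prime-finding task really is a few lines in any text language - a plain sieve of Eratosthenes in JavaScript, just to illustrate the contrast with patching the same logic out of boxes and wires:

```javascript
// The prime-finding task mentioned above, as a few lines of text code:
// a sieve of Eratosthenes.
function primesUpTo(n) {
  const sieve = new Array(n + 1).fill(true);
  sieve[0] = sieve[1] = false;
  for (let i = 2; i * i <= n; i++) {
    if (!sieve[i]) continue;
    for (let j = i * i; j <= n; j += i) sieve[j] = false; // strike multiples
  }
  return sieve.flatMap((isPrime, i) => (isPrime ? [i] : []));
}

console.log(primesUpTo(30)); // [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Loops and early exits like these are exactly the kind of control flow that box-and-wire patching makes awkward.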
Well, it is something I felt a lot when building a patch for Max4Live to simply process OSC information, which was a frustrating experience in visual Max/MSP. At the time I was actually able to do it in JavaScript, because Max/MSP allows that, and it was way easier - even though you had to edit your JavaScript code in an external editor, which is unnecessarily complicated.
It is this experience of a complicated way to combine visual and text programming that made me want to explore a programming environment that mixes the two, which eventually led me to work on my zed editor project 6 years ago. I hope I will find some time to post some TMW videos about it (🤞🤞).
And later on in the discussion he seems to agree that what is missing is more a way to program with another paradigm, like imperative programming.
I guess he would agree that it doesn't need to be a text language. I think this is an important point: box'n'arrows are very expressive for some kinds of programming tasks, but for others we must look for other visual representations that are more expressive.
And my point of view now, is that a combination of box'n arrow and block programming (Ă la Scratch) can be a very good start.
It seems to me like he doesn't actually much care for visual design. Thus, the distaste for bevels and drop shadows, seeing those things as superfluous eye candy rather than tools that designers can use to create meaning.
He built the patcher as a tool for non-programmers. My whole philosophy is: build a visual tool for expert programmers, where "programmer" is more of a way of thinking than a particular skillset.
So his inclination to reach for text and an imperative paradigm feels like a blind spot - and an unexpected one, considering he basically defined how most people conceive of visual programming.
To me, that's terrifying. We might have been better off if Max and Pd hadn't existed, and visual programming had instead been popularized by someone with a knack for visual communication.
It occurred to me after publishing the episode that I missed a good opportunity to synthesize and reflect on the interview. As the host, and de facto curator of which ideas make it to the show (and which ones get cut - like the red herring discussion about whether "bang" is feminist or not), I should probably spend a few minutes at the end of the episode recapping the key ideas and offering my take. After all, I have the benefit of hearing the interview multiple times, living with it for a few weeks while I work on the edit, whereas the listeners will likely only hear it once, and might not have the occasion or attention or energy to do that synthesis.
On finding primes graphically, something like this https://www.youtube.com/watch?v=R8zqqLlrnQM comes to mind.
Ivan Reese said:
> It seems to me like he doesn't actually much care for visual design. Thus, the distaste for bevels and drop shadows, seeing those things as superfluous eye candy rather than tools that designers can use to create meaning.
Yes, I agree, and PureData would have been better with a bit more use of visual effects, to improve readability. In that space Max/MSP did good work, and I generally find that Max/MSP patches are more "readable" than PureData ones, with their mere black-on-white look with no contrast at all.
I also find that the interview shows that, on the contrary, he actually cares about visual design. He just chose to stick to this very simple and basic visual design, and I guess it is very consistent for him. But I agree it would be better if he had stopped a bit farther along the "raw <-> eye candy" scale! 🙂
I have also heard that one of the reasons he doesn't want to redesign with visual "effects" is to not break existing patches that "rely" on the old look. I guess something like: if you add thicker borders and inner padding for blocks, the 2D arrangement might change and some patches could become less readable.
> He built the patcher as a tool for non-programmers. My whole philosophy is: build a visual tool for expert programmers, where "programmer" is more of a way of thinking than a particular skillset.
I totally agree with this philosophy!! 100% of it. In fact it is that kind of idea that made me want to work on FoC and start my experiments (ok, now I must find the time to make a TMW video to present this 🙂🙂...). It was first because block and arrow coding was a pain for some "simple" programming tasks I know I can manage quickly with text programming.
> So his inclination to reach for text and an imperative paradigm feels like a blind spot - and an unexpected one, considering he basically defined how most people conceive of visual programming.
At least, the takeaway of this for me is that he agrees that block and arrow programming with PureData is not efficient for some programming tasks. And that something is missing.
> To me, that's terrifying. We might have been better off if Max and Pd hadn't existed, and visual programming had instead been popularized by someone with a knack for visual communication.
Yes, maybe. But as we all know there is a huuuge load of visual programming environments, trying lots of ways to convey "programming" meaning - with very few that are still useful/used, even some with great visual communication. In that space, for me, Max/MSP and PureData stand apart: they are (especially Max) programming environments that lots of non-expert programmers use to build things that are useful to them. They are successful in that, because there is something in them that "works", and this success is an inspiration for me.
I wasn't checking this Slack for some time, but now that I did, I discovered that it passed 1,000 members, really cool! 🎉
I stumbled upon Progressive WebAssembly Apps (PWAAs) yesterday and thought they might be of interest to members of this group. It's basically just combining Progressive Web Apps (PWAs) and WebAssembly. The main selling points:
• Cross-platform (runs in every modern browser)
• Native feel (mobile system UI integration, add to home screen, offline functionality, ...)
• Installation as easy as bookmarking a website
• Near-native performance using WASM
For me, this feels like it could become the go-to solution for cross-platform apps. I'll probably write a Todo-PWAA (I need a replacement for Wunderlist anyway :P) to test this approach.
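For anyone wondering what the "install to home screen" half involves: the PWA side is mostly a web app manifest plus a service worker. A minimal manifest sketch - every name, path, and color here is a placeholder, not taken from any real PWAA:

```json
{
  "name": "Todo PWAA",
  "short_name": "Todo",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#ffffff",
  "icons": [
    { "src": "icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

It gets linked from the page with `<link rel="manifest" href="manifest.json">`; the WASM part is just app code that a service worker can cache for offline use.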
Recording: https://www.youtube.com/watch?v=0ySua0-c4jg Slides: https://alexkehayias.github.io/webassemblysf-presentation-20190820/
There seems to be a general trend of browsers adding OS features and OSes adding sandboxing features
What's the code size for a minimal PWAA producing "Hello world"?
Konrad Hinsen "The 'Seed' example app uses the seed framework and clocks in at ~600kb (including ~300kb for an icon and splashscreen), works offline, and can be installed to your homescreen on iOS or Android devices." - https://woz.sh/
Felix Kohlgrüber Thanks for the pointer! Those numbers look reasonable for the 21st century.
The main limitation imo is storage. Both localStorage and IndexedDB have very low size caps (typically ~50mb) and are subject to random deletion whenever the browser feels like it.
Also the IndexedDB api isn't great for building a decent query language on top eg transactions are already hardcoded in, every method is asynchronous, there isn't a proper seek method.
Overall I get the impression that PWAs are still very much built around an online-first model where all your data lives on a server somewhere but you might cache stuff for the occasions where you're offline for a few minutes. It doesn't seem feasible to produce a good offline-first experience on the web today.
Also their approach to durability is:
> ...there exists a small chance that the entire transaction will be lost if the OS crashes or there is a loss of system power before the data is flushed to disk. Since such catastrophic events are rare, most consumers should not need to concern themselves further.
Jamie Brandon Thanks for your comments, that's an area I haven't looked into yet. For what I'm planning to build, 50mb shouldn't be a problem, but I can see that this would be limiting in other cases (e.g. a music app). Regarding persistence / durability, one could argue that smartphones and tablets aren't the right devices for persistent and durable storage anyway. They get dropped, drowned, or stolen, which makes storing data only on them a bad idea. My current thoughts are that an "end-user home server" and devices connected to it might be the best solution. This would allow for backup, synchronization, availability and scalability of stored data without giving it into the hands of who-knows-whom. But that's another discussion ;-) Back to the point: I could imagine that we'll be able to use WASI within PWAs some day. If I remember correctly, this could allow fine-grained file system access for apps, which should fix the current storage problems.
I used to think "the web" was the future of apps. But nowadays I think it's an unsustainably complex Tower of Babel. Are we really going to be building on top of all these poorly-thought-out Web APIs in 50 years? That's a horrifying thought. We should probably keep a couple of APIs at most.
The problem we have nowadays is that we work with code libraries, built upon app frameworks, built upon browsers, built upon operating systems, built upon machine architectures. It's an incredibly complex and fragile tower. I think we should wipe out a few levels and try again.
I agree with Nick, though I think apps-on-demand and cross-platform nature of web apps is definitely a step toward the future
I'm hopeful for something emerging out of wasi. The killer feature of the web for apps is easily distributing sandboxed cross-platform code, so separating that out into a platform that supplies just those things and doesn't pile a bunch of compulsory choices on top seems like a win.
> one could argue that smartphones and tablets aren't the right devices for persistent and durable storage anyway.
The idea of offline-first apps is that they are completely capable without a network connection, and just use the network for backup / collaboration.
Email is a perfect example - with native clients I can search and write emails offline and then sync up when I'm back online, even though the canonical copy of my inbox lives on a server somewhere. Web-based email never seemed to deliver a decent offline experience.
A side-effect of building stuff in this way is that offline-first apps typically load faster (because the data is already here), are more responsive (don't have to wait for a network roundtrip to eg autocomplete an email address) and are easier for the user to extend/compose (because all the data and code exists locally already).
The future I'm hoping for is we keep the ease-of-distribution of the web but allow building offline-first apps that are actually able to use the full capabilities of the machine. My phone has a 128gb drive. I send/receive ~0.3gb of email per year. Keeping a synced copy really shouldn't be a problem.
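That email example can be sketched as a tiny "outbox": writes always land in a local queue (the local copy is the source of truth), and the queue drains whenever connectivity returns. A toy version - class and method names are mine, and `send` stands in for whatever transport a real client uses:

```javascript
// Toy offline-first outbox: queue locally, sync when online.
class Outbox {
  constructor(send) {
    this.send = send;    // network transport (injected)
    this.queue = [];     // would be persisted locally in a real app
    this.online = false;
  }
  write(msg) {
    this.queue.push(msg);          // local write always succeeds
    if (this.online) this.flush(); // opportunistic sync
  }
  setOnline(online) {
    this.online = online;
    if (online) this.flush();
  }
  flush() {
    while (this.queue.length) this.send(this.queue.shift());
  }
}

// Usage: drafts written offline survive until connectivity returns.
const sent = [];
const outbox = new Outbox((m) => sent.push(m));
outbox.write('draft 1');
outbox.write('draft 2'); // still offline: nothing sent yet
outbox.setOnline(true);  // both drafts sync now
console.log(sent); // ['draft 1', 'draft 2']
```

The point of the pattern is that the app never blocks on the network; the browser deleting the queue before it drains is exactly the data-loss case described above.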
Nick Smith I also wish there were fewer APIs solving the same task, fewer poor abstractions, and especially less tooling (https://hackernoon.com/how-it-feels-to-learn-javascript-in-2016-d3a717dd577f). But on the practical side, cutting down on existing stuff is pretty hard. Who would decide what to keep, and what to do with all existing projects using some tech you'd like to get rid of? In theory, I'm totally with you (together with probably most web devs), but achieving practical improvements is the real challenge here. I hope that WASM/WASI will be the kind of restart we all want, but let's see.
Another thing is that the web isn't the only platform suffering from growing complexity. I guess that if I wanted to write a native Android app, it wouldn't be much easier than using web tech. The difference is that I'd also have to write like 5 other apps for the other major platforms and deal with all the little differences between them. Compared to that, a single web app that can be installed on all platforms and works consistently sounds pretty interesting.
Jamie Brandon Couldn't agree more. I'm a huge fan of offline-first web apps. My last comment wasn't meant to argue against that; I just wanted to say that random deletion of data by the browser shouldn't cause data loss. So in the email example, if the browser deleted data it could be restored from the email server. This may be inconvenient, but not a big problem. And as long as there's plenty of storage available, why should the browser free up space? I can kind of understand that browsers won't guarantee to keep the data forever. Extending PWAs to allow unlimited data storage, with some form of UI to check each PWA's data usage, would be great.
> if the browser deleted data it could be restored from the email-server
Only if the server has already synced that data. Otherwise it just deleted some emails that were queued waiting to be sent and undid all the changes to my inbox.
And even if there is no data loss, deleting the data assumes that I'm almost always online and that it's only stored locally as an optimization. I have this problem in practice with eg spotify which occasionally clears all downloaded music for some reason, usually just as we're about to drive into the mountains and out of cell phone coverage. Or when I'm abroad and don't have mobile data.
There was a proposal to allow tagging browser data as persistent so that it won't be deleted by the browser without asking the user first - that would be a big improvement if it lands. But I still expect the APIs in general to be built around an online-first model of the world.
> This may be inconvenient, but not a big problem.
It's not just inconvenient, it's a symptom of an architecture that is based around turning my multi-thousand-dollar supercomputer into a dumb client for an overloaded vps somewhere on the other side of the world and on the other side of a spotty cell connection. Not because it produces a better experience - aside from the ease of distribution, most web apps provide dramatically worse UX than the native apps of the 20th century - but because it's more profitable to own the users' data. So it really bugs me to see that model becoming not just prevalent but invisible, to the point that people design APIs for offline usage that are not usable by someone who is regularly offline.
So many great thoughts about software design, architecture, and open source:
"We talk about programming like it is about writing code, but the code ends up being less important than the architecture, and the architecture ends up being less important than social issues."
I also believe that programmers feel latency and it affects their mood even if they don't notice it. (Google has recently done some research in this area that kinda confirmed my belief, here's hoping they'll publish it publicly!) I've been noticing this a lot lately (ironically, slack is one of the biggest culprits). I would love to see that research.
I think the dramatic effect of low latency is one of those things that's crystal clear if you ever get the chance to feel it.
I made http://alltom.github.io/instantaneous-web/ to demonstrate minimal latency on the web: the entire page is just some preloaded static images where I hard-coded the flow for clicking Store > Shop Mac > From $999 > MacBook Air > iMac. Gives us something to shoot for!
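The whole page boils down to preloaded images swapped on click - a sketch with hypothetical filenames, not the demo's actual markup:

```html
<!-- Preload every screen of the hard-coded flow up front... -->
<link rel="preload" as="image" href="store.png">
<link rel="preload" as="image" href="shop-mac.png">
<!-- ...so each "navigation" is an instant local image swap. -->
<img id="screen" src="store.png" onclick="this.src = 'shop-mac.png'">
```

No network round-trip happens on click, which is why it feels instantaneous.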
Hmmm David Piepgrass in general or UI-wise? Check out how Jetpack Compose does this behind the scenes - https://youtu.be/Q9MtlmmN4Q0 - pretty cheap inserts in the UI tree, if that's what you're looking for. Not sure I understand the question.
But as far as I get it, your logic (observe, filter, map) is not UI-related, so the UI could belong to any library; the logic is behind it. You could do that in Kotlin (Kotlin Flow), in Dart (RxDart and Freezed), in JS (Uhh RxJS & Collections.JS if you wanna use a baked solution) - most languages have their own implementation.
Ian Rumac I am interested in a general solution for efficient recomputation - UI updates are the most well-known use case, but far from the only one. Compose is interesting... it is demonstrated as a thing for building UIs but I wonder if it could be used for other things. Their Gap Buffer approach might not scale well, but given a different data structure like AList (http://core.loyc.net/collections/) the worst-case performance should be better (though average perf may be worse). However, it does not appear to solve the "filter on large collection" scenario.
It's not just for UIs - states and effects are also memoized and reused instead of being recomputed. From your examples, I thought you were talking about UI 🙂
Otherwise, it's simple reactive programming; it's just up to you how you'll implement it. I'd do event-based granular updates to minimize recomputation tbh; if it's just a list, sending transactional updates is the best way.
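A toy illustration of those granular updates for the filter-on-a-large-collection case: keep the filtered view cached and apply each incoming delta to it, instead of re-filtering everything. Names are illustrative, not any library's API:

```javascript
// Cached filtered view updated by deltas rather than recomputation.
class FilteredView {
  constructor(items, pred) {
    this.pred = pred;
    this.view = items.filter(pred); // full filter runs exactly once
  }
  insert(item) {
    if (this.pred(item)) this.view.push(item); // O(1) per delta
  }
  remove(item) {
    const i = this.view.indexOf(item);
    if (i !== -1) this.view.splice(i, 1);
  }
}

const evens = new FilteredView([1, 2, 3, 4], (x) => x % 2 === 0);
evens.insert(6);
evens.insert(7); // fails the predicate: view untouched
evens.remove(2);
console.log(evens.view); // [4, 6]
```

The trade-off is that every mutation has to flow through the view as an event; touch the source collection directly and the cache silently goes stale.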
I've stumbled upon the concept of zooming user interfaces (ZUI). Here are two great examples that show the potential of it. I thought some of you might find this interesting.
The first one is "Table Lens" from 1994, which allows users to "zoom out" to see patterns in large tabular datasets.
video: https://www.youtube.com/watch?v=ZDY9YCYv7z8 paper: https://dl.acm.org/doi/pdf/10.1145/948449.948460
The second example is a zoomable calendar called "Date Lens" that allows seamlessly browsing a calendar at different timescales.
video: https://www.youtube.com/watch?v=fyWtt_7kYDg paper: https://www.microsoft.com/en-us/research/wp-content/uploads/2004/03/tochidatelens.pdf
I'm wondering why this pattern is not more prevalent in today's software. I'm not aware of any popular software that supports this kind of interaction mode.
Jef Raskin sure loved the idea of ZUIs (Humane Interface ch.6), but I don't know that the core idea, that our spatial awareness would carry over into a 2D window, was ever really proven out. It hadn't yet been when he first wrote about them, at least.
This is the real ZUI: https://www.youtube.com/watch?v=G6yPQKt3mBA (From another thread: https://futureofcoding.slack.com/archives/C5T9GPWFL/p1588677125006200)
These are awesome, thanks! I know someone in my twitter feed is using this for code navigation but I can't find it and it's forever lost in the feed now.
Zooming is used a fair amount nowadays. You see it in the Atom editor, with the shrunken form of the document on the right. It is used extensively in Fancade, which, because it is based on an isometric projection of 3D space, is quite natural. All graphic design products use zooming constantly. The reason you don't see it used that much in programming is that once you zoom text below 6 pt it is illegible, and therefore useless. Seeing the shape of code does not inform you in any way as to its function. Unlike a bitmap, which can be sampled down to 1/16th of its pixels and still be quite legible, code basically turns to mush when shrunken. Looking at a blobby bunch of wires may be sexy at first glance, but without comprehension it is mere graphical artifice with no substance or actual productive value. I don't find the Atom shrunken-text form on the right much better than a scrollbar.
Zooming does preserve context better than jumping, and I think we will see more of it, but don't pin your hopes on a UI trick advancing programming significantly.
Edward de Jong What about semantic zooming? https://www.youtube.com/watch?v=5JzaEUJ7IbE&t=194
ZUI is one of the key aspects of the UI for the upcoming iPad app Muse https://museapp.com/
There's also a great piece about the research behind the design of the app, including the ZUI https://www.inkandswitch.com/muse-studio-for-ideas.html#zooming-navigation
I've always wondered why we've only ever had 2 axes of scrollbars - why not also z (and even t)?
Yes - doesn't always have to be visible, but doesn't that seem like an understandable affordance to zoom out/in and undo/redo?
I think it's generally not as useful because for most contexts it is very easy to tell "where you are" i.e. how zoomed in you are. It's not possible to tell how far through a document you are in the same way
I think that scrollbars make the most sense for documents that scroll in 1 dimension. For moving a viewport across a large 2D canvas, I prefer a UI that lets you pan in 2D using mouse movements. Like Google Maps, for example. Using two separate X and Y scrollbars to move around is clumsy by comparison. If I want to visualize where my viewport is in 2D space, I prefer a 2D map of the entire canvas that shows my current viewport position, rather than 2 separate X and Y scroll bars. If you add zooming to this (a Z axis), then again I like the Google Maps UI where you use the mouse scroll wheel or the trackpad scroll gesture to zoom in and out, in preference to a Z axis scroll bar. This is what I've mostly implemented in my Curv project.
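That scroll-wheel zoom boils down to one line of arithmetic: rescale the pan offset so the world point under the cursor stays fixed. A one-axis sketch - the names are mine, assuming the mapping screen = world * scale + offset:

```javascript
// Zoom about the cursor: keep the world point under `cursor` invariant.
// Mapping: screen = world * scale + offset.
function zoomAbout(offset, scale, cursor, newScale) {
  // Solve (cursor - offset) / scale === (cursor - newOffset) / newScale
  const newOffset = cursor - ((cursor - offset) * newScale) / scale;
  return { offset: newOffset, scale: newScale };
}

// Zooming in 2x about screen x = 50:
const view = zoomAbout(10, 1, 50, 2);
console.log(view); // { offset: -30, scale: 2 }
// The point under the cursor maps to the same world coordinate:
console.log((50 - 10) / 1, (50 - view.offset) / view.scale); // 40 40
```

Apply the same formula independently to x and y and you get the Google-Maps-style behavior described above.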
Curv is missing a "time scrollbar" for rewinding an animation, but I think this is a good idea. It works well in the viewer for Youtube videos.
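The Google-Maps-style zoom described above has a small invariant at its core: zooming about the cursor means the canvas point under the cursor stays fixed. A minimal sketch of that math, with illustrative names (not from Curv or Google Maps); `view` is the canvas point mapped to the viewport origin plus a zoom factor:

```javascript
// Convert a viewport (screen) point to canvas coordinates.
function toCanvas(view, px, py) {
  return { x: view.x + px / view.scale, y: view.y + py / view.scale };
}

// Zoom by `factor` about the viewport point (px, py). The canvas point
// under the cursor before the zoom is still under the cursor afterwards.
function zoomAbout(view, factor, px, py) {
  const anchor = toCanvas(view, px, py);
  const scale = view.scale * factor;
  return { scale, x: anchor.x - px / scale, y: anchor.y - py / scale };
}
```

Wiring this to the scroll wheel is then just calling `zoomAbout(view, e.deltaY < 0 ? 1.1 : 1 / 1.1, e.clientX, e.clientY)` in a `wheel` handler.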
If anyone here is interested in logic programming, I recommend this new podcast: https://thesearch.space/episodes/1-the-poet-of-logic-programming đ
Now that we know how good ML models are at hallucinating plausible natural language, using them to recognize and complete coding idioms makes sense and I want it! https://kite.com/blog/product/kite-launches-ai-powered-javascript-completions/
For best developer productivity, stay off Slack. đ
Yes, certainly. On the other project I'm working on, we use it to communicate internally between team members, but also with stakeholders. It helps our productivity, together with other tools like WhatsApp and Teams. And while we are all working from home because of corona, I think it's crucial.
I thought this was a Slack about it? I figure the future of programming is tightly coupled to productivity :)
Not at all. Productivity is just one axis along which things can be improved. I'd also say that future tools should also better enable people to be counterproductive :)
(Aside: when replying to someone, please always use a thread rather than posting at the top level. That's what :thread-please: means)
Well, broaching the subject - what are your favorite ways to generate/template code?
I've found IntelliJ's templating to be pretty neat - although a bit annoying - since it lets you create wizards and use them to generate code inside a project. It's a pretty great boost if you've got a framework for yourself.
I tried mustaches and annotation processors, and some custom mustache-like logic, but everything feels… kinda iffy.
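For reference, the core of mustache-style templating is tiny. A minimal sketch of the substitution pass (real engines like Mustache add sections, escaping, and partials; this only does `{{name}}` lookup, with dotted paths and empty-string fallback for missing keys):

```javascript
// Replace every {{key}} or {{a.b.c}} in the template with the
// corresponding value from the context object.
function render(template, context) {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_, key) => {
    const value = key.split('.').reduce((obj, k) => obj && obj[k], context);
    return value === undefined ? '' : String(value);
  });
}

// Example: generating a getter from a field description.
const getter = render(
  'public {{type}} get{{Name}}() { return this.{{name}}; }',
  { type: 'String', Name: 'Title', name: 'title' }
);
// → "public String getTitle() { return this.title; }"
```

The "iffy" feeling usually comes from everything beyond this core: escaping, conditionals, and keeping templates in sync with the code they generate.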
----TODAY'S DEVELOPMENT COMPLAINTS---- Context: I'm reimplementing a dragbox select a la https://simonwep.github.io/selection/ or many other libraries, because I need to deal with elements that are overlapping (I'm using it for DOM element selection, but you could imagine selecting items in an outline and you'd have 60% of the hairiness). ---ISSUES IN THREAD BELOW---
a. handling/drawing the box when you drag-select and making it disappear on mouseup. This seems like it should really be much simpler, but as with the convivial talk on dealing with SVG for modeling object models ( https://youtu.be/uXv_386TyqY?t=1419 ), the DOM gives you very specific handles on geometry for mouse moves (weird choices of offsets, no other options)
b. wrappers/callbacks for the JavaScript event model for mice and keyboards. It always feels so, so wrong. I don't know quite how to describe this: it's like a Lego kit where you have to mix chemicals and mold 20% of the bricks yourself. You're constantly interrupting every single mouse move or click to check a condition. Shouldn't this be more of a "bidirectional" relationship, where you can put the "if" block outside the mouse move? I certainly don't personally think "am I holding a knife?" every single time I move my arm - not sure if this analogy makes sense.
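One common workaround for the "if inside every mousemove" problem is to only have move/up handlers exist for the duration of the drag, so they never need a guard. A DOM-free sketch of that state machine (names are illustrative; in a browser, `onDown` would run on mousedown and install `onMove`/`onUp` via `addEventListener`, and `onUp` would remove them):

```javascript
// Tracks a drag and reports the current selection box via a callback.
function dragTracker(onBox) {
  let start = null;
  return {
    onDown(e) { start = { x: e.clientX, y: e.clientY }; },
    onMove(e) {
      // No `if (dragging)` check needed here, because this handler is
      // only attached while a drag is in progress.
      onBox({
        left: Math.min(start.x, e.clientX),
        top: Math.min(start.y, e.clientY),
        width: Math.abs(e.clientX - start.x),
        height: Math.abs(e.clientY - start.y),
      });
    },
    onUp() { start = null; },
  };
}
```

It doesn't make the event model bidirectional, but it does move the condition from "checked on every event" to "encoded in which handlers are attached".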
the getBoundingClientRect API feels like it was designed by a sphinx who lost its job during the recession and got pressured into learning to code
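Sphinx-designed or not, `getBoundingClientRect` rects are at least easy to hit-test against a drag box: selection is plain rectangle intersection. A minimal sketch, using plain-object rects so it runs outside a browser:

```javascript
// True when rectangles a and b (each with left/top/right/bottom, as
// returned by getBoundingClientRect) overlap.
function intersects(a, b) {
  return a.left < b.right && b.left < a.right &&
         a.top < b.bottom && b.top < a.bottom;
}

// In the browser, selecting overlapping elements would look like:
//   const hits = [...candidates].filter(el =>
//     intersects(box, el.getBoundingClientRect()));
```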
We started this new incubator out of Mozilla in order to work with & invest in developers, startups, and technology enthusiasts who are building things that will shape the internet and have a positive impact without needing to hyper focus on the bottom line. We call this our "fix-the-internet" incubator.
https://builders.mozilla.community/
I wonder how exactly they would define "fixing the internet". I don't really see the common denominator in "Firefox, Wikipedia, Wordpress, DuckDuckGo, Kickstarter, GitHub, Node.js, Ethereum". My best guess would be that it's about privacy?
And to add to that, I feel like the "Bret Victor army" is not quite the demographics they're looking for. Just my thoughts, still interesting! Thanks for the link.
What they are looking for: https://builders.mozilla.community/who-we-fund.html
The future of coding should have a social, networked component. IMO it should be distributed, decentralized & collaborative, and Mozilla is funding that kind of tech.
Lunch with Alan Kay, https://futureofcoding.org/notes/alan-kay-lunch.html: " 'You also need to be embedded in a community of others who have diverse perspectives to bounce these ideas off of.' Alan argued passionately in favor of college and grad school. While he is well aware of its imperfections, he believes it's still better than an 'oral culture' or being an autodidact (just following your nose where your curiosity leads you)."
Do you agree that college and grad school are the best way to learn? I think the traditional education system is "below what is actually needed"; there are fundamental flaws in a system where other people decide for you what you should learn and how (they have neither the knowledge nor the incentives to decide what is best for you). I understand that we need to stand on the shoulders of giants and not reinvent the wheel all the time, but that is just a challenge for a non-coercive education system, and it is solvable. I think the best solution for education lies in this space, and we should look for / invent it here. We should raise the quality of learning collaboration to a much higher level, and we should seek out mentors and thought leaders like Alan Kay and take their advice and recommendations, but it should be up to us learners to decide what to do with that advice.
What is your opinion?
I agree that it is essential to always get frequent reality checks and to have your assumptions checked; otherwise it is easy to get stuck in your own world and delude yourself that you are doing great stuff. Graduate school is probably the easiest way to be part of this kind of community, at least if there are frequent seminars, colloquia, and discussions where different viewpoints are present. On the other hand, if you stay in the same university and group throughout undergraduate and graduate school, you might also become a bit blind to other perspectives and other ways of doing things.
10-year-old with 3 years of coding experience starts YouTube channel demonstrating how to use kids' learn-to-code websites
I'm researching the history of extensions in text editors (e.g., like VSCode extensions https://marketplace.visualstudio.com/). I generally consider TextMate (released 2004) as the starting point of an era of text editors built around extensions. That, for example, makes Sublime Text, Atom, and Visual Studio Code all "TextMate-likes", in that they're built around a significant amount of functionality coming from shareable extensions. (Note that I'm making a distinction between extensible text editors, à la Emacs, and text editor extensions that are easily shareable packages/plugins/bundles, etc.)
Other notable milestones in the text-editor-extension era are the introduction of Pathogen.vim for Vim in 2008 and package.el, added to Emacs 24 in 2011. Light Table and Atom, released in 2012 and 2014 respectively, are also notable as the first popular web-based text editors built around extensions. Being web-based greatly increased the ease of writing extensions that involve GUI elements.
I'm curious to hear what others consider important milestones for extensions in text editors. For example, I'd love to hear about any prior art to TextMate. Before package managers for Vim and Emacs, how did people share syntax highlighting files? And were there examples in other text editors that went beyond syntax highlighting - e.g., did any text editors prior to 2004 have a plugin system that could do more than just add syntax highlighting, like adding commands?
Visual Brainfuck interpreter https://franklin.dyer.me/htmlpage/brainfuck.html