Tyler Adams 2021-03-17 16:16:24

I had a shower thought about pushing to prod, wanted to share it with FoC.

Pushes to prod make no sense except for front end. A push to prod is basically FORCING EVERYBODY ONTO IT NOW. You used an old version? Too bad it's gone.

For front end, I get it, no human is going to visit http://v1337.facebook.com. But backend, where clients are programs? Makes no sense.

Why don't we use proper dependency management? Push a new version to prod. Let clients migrate manually to that new version.

Want to break backwards compatibility, go for it. Push first (and use semver), fix clients later.

If clients want the latest and not have to upgrade manually, let them use a symbolic version "latest." Just like the current system, only opt-in.

Why can't we have this world?
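A minimal sketch of the proposal, in Python with invented names: every published version stays callable, and "latest" is an explicit opt-in alias rather than the forced default.

```python
# Version-pinned dispatch: deploying a new version adds it to the table,
# it never replaces older ones. "latest" is an opt-in symbolic alias.

HANDLERS = {}  # version string -> handler function

def publish(version, handler):
    """Publish a new version; earlier versions remain available."""
    HANDLERS[version] = handler
    HANDLERS["latest"] = handler  # only clients who ask for "latest" auto-upgrade

def handle(requested_version, payload):
    """Route a request to whichever version the client pinned."""
    return HANDLERS[requested_version](payload)

# Two releases; v2 is a breaking change (renamed field), yet v1 clients keep working.
publish("1.0.0", lambda p: {"greeting": "hello " + p["name"]})
publish("2.0.0", lambda p: {"greeting": "hi " + p["user"]})

print(handle("1.0.0", {"name": "ada"}))   # pinned clients are untouched
print(handle("latest", {"user": "ada"}))  # opt-in clients track the newest
```

The point of the sketch: "push to prod" becomes an append to the table, and migration is the client changing its requested version string.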

Kartik Agaram 2021-03-17 16:27:03

On the front-end side, this article has been thought-provoking to me for years: https://www.ribbonfarm.com/2014/09/24/the-rhythms-of-information-flow-pacing-and-spacetime

Mariano Guerra 2021-03-17 16:35:51

Because now, instead of maintaining one version, you are maintaining every release you ever shipped. When there are problems or reports you don't know which versions they come from, and it may generate a combinatorial explosion of causes. You have to backport/forward-port all fixes, since reusing code across versions may introduce issues if you refactor and introduce a bug (version 123 before a refactoring is no longer version 123).

Mariano Guerra 2021-03-17 16:36:16

REST supports version negotiation, but nobody ever used it; we can barely maintain the current version
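For reference, the negotiation Mariano means is usually done with vendor media types in the `Accept` header, in the style of GitHub's `application/vnd.github.v3+json`. A hypothetical sketch (the `vnd.myapp` name and defaults are invented):

```python
# Media-type version negotiation: the client states which API version it
# wants in its Accept header; the server falls back to a default otherwise.
import re

def negotiate_version(accept_header, available=("1", "2"), default="2"):
    """Pick the API version requested via an Accept header like
    'application/vnd.myapp.v1+json', or fall back to the default."""
    m = re.search(r"application/vnd\.myapp\.v(\d+)\+json", accept_header or "")
    if m and m.group(1) in available:
        return m.group(1)
    return default

print(negotiate_version("application/vnd.myapp.v1+json"))  # client pinned to v1
print(negotiate_version("application/json"))               # non-negotiating client gets current
```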

Mariano Guerra 2021-03-17 16:39:11

what do you do when dependencies break, bitrot, get security issues, or stop being supported by the host? Performance improvements only help users on the latest version. You have to keep supporting all schemas and file formats in parallel. Monitoring becomes much harder to understand; you may get performance issues when someone does something in an older version

Mariano Guerra 2021-03-17 16:39:31

I'm not saying it wouldn't be nice, just that we need to change almost everything we do and how we do it to support it.

George Campbell 2021-03-17 16:40:37

we did something like this in production with jboss modules in java. dynamically loading jars on request.

Tyler Adams 2021-03-17 16:44:31

Mariano Guerra Re: maintenance, this is the same problem every library developer has, and they publish versions, not just "latest"

Re: performance, if we treat separate versions as separate services then we don't have conflation issues. If we want to combine stats, we can always combine them.

Chris Knott 2021-03-17 16:46:55

Cambria from ink&switch follows through on this idea

George Campbell 2021-03-17 16:50:15

to give you an idea of the scale: that one service has 700 active and 260K inactive versions, collectively doing about 75K rps globally.

Tyler Adams 2021-03-17 16:51:35

Was it worth it in practice?

George Campbell 2021-03-17 17:00:55

the system got too complex to run locally, which meant developers couldn’t debug locally anymore. We’ve switched to a Docker-based system but it’s still cumbersome. Humans are bad at cleaning up after themselves, and we had to build lots of tools to track down underutilized (different from unused) versions so we could clean them up as fast as they were being made.

Mariano Guerra 2021-03-17 17:17:10

if you treat separate versions as separate services, which makes sense, then your server billing, maintenance, operations, and monitoring costs/time increase with each new version. And it's not only support and issue tracking: documentation, tutorials, how-tos, screenshots, and videos all have to cover every version someone is using. I've maintained two major versions of the same product in production for a few customers and it's not fun.

Mariano Guerra 2021-03-17 17:20:36

I've had many cases where you get an issue in a dependency, report it, and the maintainer (with good reason) tells you to upgrade to the latest major version, since the one you are reporting against is no longer maintained

Tyler Adams 2021-03-17 17:33:39

Billing/maintenance/ops/monitoring is different from standard dependencies. I suspect these are automatable problems in the long term (i.e. serverless), but they are real today. Everything else sounds like the same dilemma as standard dependencies though?

The story of reporting an issue in a dependency and being told that version is deprecated and to upgrade sounds good to me. It's better than being forcibly upgraded and having the old one disappear.

Tyler Adams 2021-03-17 17:35:49

@George Campbell Too complex to run locally because there were too many services for a computer to run? It's a fair point that running an http server is more involved than installing a package to disk

Ivan Reese 2021-03-17 17:39:55

Datomic is probably a great analog. When used as intended, you never do away with your old data. All data is stored with an explicit sense of time. (Kartik Agaram's linked article was good!) You can have many servers reading the database, and they each have a temporally-locked view of the data. Your data will never unexpectedly change out from under you.

Datomic works great when you've designed a whole system around the way that it works. It's not a drop-in replacement for Postgres or Mongo.

Here's another example — Basecamp (and Highrise) have their "until the end of the internet" practice, where users of old versions of their products will not be forced to upgrade. My company is still using the original Basecamp, which is 2 or 3 major reinventions old at this point. It still works great for our needs, in ways that the newer products wouldn't.

It's totally possible, and even practical, to keep old things alive when new things come into existence. You just need to design with that goal in mind, and that probably will demand confronting and rebuking some established practices and tacit assumptions. (In other words — what Mariano said, just with less "here are all the things that would be different" and more "here are places where we already do this and it's fine/good.")

Mariano Guerra 2021-03-17 18:25:59

maintaining a small number of major versions is doable, maintaining all versions or at least say 10 versions is another thing

Mariano Guerra 2021-03-17 18:27:27

maintaining some major versions is almost the same as maintaining a family of products: easier if you don't have to keep backward/forward compatibility, a little harder if you do, but still doable.

Mariano Guerra 2021-03-17 18:29:34

each version adds extra maintenance overhead. If it gets "frozen" the overhead may be small, but each extra version adds overhead on top, so you have to see if it makes sense.

Mariano Guerra 2021-03-17 18:30:53

guix, containers, cambria and unison may make it easier

Mariano Guerra 2021-03-17 18:31:59

just an example, the weather widget on my OS stopped showing weather forecast for the following days, it seems the external api broke/changed, they didn't change anything, yet it broke

Mariano Guerra 2021-03-17 18:32:45

so it's not only your code, dependencies and environment, but all external systems too

Kris Pruden 2021-03-17 18:38:55

TLDR: to me this is a pretty simple cost-benefit tradeoff. The reduction in regressions/confused users is rarely worth the cost (imo).

(Others have covered some of the points I make here as I wrote this, so apologies if I’m repeating anything)

For context, I cut my teeth in the bad old days of on-premise enterprise software. Our software was mission-critical, and it was incredibly difficult to entice customers to upgrade to new versions. For any given customer, the upgrade process could take a year or longer to work itself out. We probably had at least a half-dozen officially supported versions, and a handful of customer-specific releases on top. The decision-making process for determining what changes went where was a nightmare, to say nothing of the actual implementation. Our release manager was super-human. I never want to go back to that world again :)

Version compatibility is a pretty well understood problem, but supporting multiple versions simultaneously carries some pretty significant engineering and operational overhead. Running multiple versions of application logic is pretty straightforward, but there’s added complexity in request routing.

Technically, it shouldn’t require more compute capacity to support multiple versions of a service, since each user can presumably only use one version at a time, but in practice redundancy means you’re going to need more infrastructure to support multiple versions for the same user base. So that adds more cost.

Each live version of the code adds operational complexity of monitoring and diagnosing issues. Two or three versions might be manageable. Ten? Impossible (imo). Each live version increases complexity linearly if not exponentially. Continuous deployment means you’re pushing dozens of versions each week. There’s no way a team could stand up separate instances of each of these and maintain their sanity. Although the canary process does resemble this motion, it’s only managing two versions at a time, and for a limited time frame.

The temptation then would be to only stand up new instances for “breaking” changes. But it can be devilishly difficult to decide what a breaking change even is. For example, consider an enumerated data type. Simply adding a value to an enum is technically a breaking change, because existing clients can’t be guaranteed to know what to do with it. And that’s just on the API interface side. At least there you can do static analysis to detect breaking changes, although the tooling needed to do this adds its own overhead. Many breaking changes manifest in behavior or semantics, which are impossible to detect statically, and can be very difficult to detect with testing or human reasoning. So, the decision of whether a given release warrants a separate instance becomes a risk management exercise, which can be costly in its own right.
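Kris's enum point can be made concrete with a small sketch (types and strings are invented for illustration): the server's v2 adds an enum value, and a client written against v1 has no branch for it.

```python
# Server side, v2 of the API: REFUNDED was added to an existing enum.
# Additive on the wire, but old clients were never told it could appear.
from enum import Enum

class OrderStatus(Enum):
    PENDING = "pending"
    SHIPPED = "shipped"
    REFUNDED = "refunded"  # new in v2

def old_client_render(status_str):
    """A client built against v1, which only knew two values."""
    if status_str == "pending":
        return "Your order is on its way soon"
    if status_str == "shipped":
        return "Your order has shipped"
    # v2 payloads containing "refunded" land here and blow up.
    raise ValueError(f"unknown status: {status_str}")

print(old_client_render("shipped"))
```

Static analysis of the API schema would flag the widened enum, but a semantic change (say, SHIPPED now also covering partial shipments) would sail through untouched.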

Planning gets more painful too with each live version. Security or critical bug discovered? Get ready for a lot of painful overhead and difficult conversations deciding to which versions the fix should be backported. Depending on how much drift there is in the code from one version to the next, it’s often not even obvious how to backport a fix.

Finally, these challenges often come to a head at the data layer. What happens when a new feature or bugfix requires a data migration? Unless you’re willing to maintain separate databases as well, it’s often practically impossible to support multiple versions, and even when it is, it adds yet more developer overhead.
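One common way to keep multiple live versions over one database is a tolerant reader in the expand/contract style. A hypothetical sketch with invented field names, where v2 split a column that v1 wrote whole:

```python
# Tolerant reader: v1 rows stored "fullname"; v2 stores "first"/"last".
# Every live version must cope with rows written by the other.

def read_user(row):
    """Normalize a user row regardless of which schema version wrote it."""
    if "fullname" in row:
        # Legacy v1 shape: split on the first space.
        first, _, last = row["fullname"].partition(" ")
        return {"first": first, "last": last}
    return {"first": row["first"], "last": row["last"]}

print(read_user({"fullname": "Ada Lovelace"}))        # v1-written row
print(read_user({"first": "Grace", "last": "Hopper"}))  # v2-written row
```

This works, but it is exactly the "yet more developer overhead" being described: every reader carries every historical shape until the old rows are migrated away.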

There are solutions to all of these problems, but in my experience (over 20 years, for what that’s worth), it has never been worth the effort. While not exactly easy, it’s generally much more practical to adopt a posture of runtime compatibility with all live clients, with a multi-phase rollout process in the rare case where a breaking change is unavoidable. If you have a good client upgrade pipeline (browser-based client or app-store) this is pretty manageable.

Note: Ivan Reese makes a great point re: Datomic/versioned data. Everything I said is based on the typical “mutation-friendly” architecture. It is probably possible to architect a service to accommodate many live versions practically, but this would have to be a fundamental design goal from the beginning. I’m not convinced there’s a way to completely avoid at least some of the costs, however, so imo there’d better be a pretty good business reason to adopt this goal.

As we move forward and more of the software-using public becomes accustomed to and expects their software to keep improving, I’m not convinced there will ever be more than a sliver of the user base that wants to stay on older versions. The question is: how much is supporting this minority of users worth to you?

elvis chidera 2021-03-20 08:14:29

Are there good open-source projects for building the UI editor interfaces found on most no-code tools (like http://retool.com, http://glideapps.com, etc)?

Components on one side, properties on the other and content in the middle. Drag-n-drop components, etc. Like the attached image.

Thanks.

elvis chidera 2021-03-20 08:27:01

I have found one good one so far: https://github.com/penpot/penpot

Although this feels like a heavy design tool that I would have to repurpose somehow.

Chris Maughan 2021-03-20 12:25:09

This looks a bit like the excellent Balsamiq, but that isn’t free, and is for prototyping UIs

Mark Probst 2021-03-20 13:46:35

We built it all ourselves at http://glideapps.com, FWIW.

nicolas decoster 2021-03-20 18:21:17

This is also something I'm looking for.

After a quick search I found: https://grapesjs.com/, which generates HTML/CSS, and https://github.com/Pagedraw/pagedraw in the React world (though maybe not actively maintained).

elvis chidera 2021-03-21 11:13:20

@Mark Probst really awesome job from the team -- from a random fan (me).