I thought this HN post on databases might be relevant to the future of coding: kevin.burke.dev/kevin/reddits-database-has-two-tables
Hmm, seems to be from 2010? This structure seems to make sense for smaller teams / workloads
At the other end of the scale, the (2013) article about how FB built their graph DB on MySQL also hit HN today
engineering.fb.com/2013/06/25/core-data/tao-the-power-of-the-graph
The conclusion I take from this is that the RDBMS they used wasn't geared towards their use case back then, and they had to forgo most of the advantages of the RDBMS as a result. It tells me that we need better RDBMSs, and I will say that they have become a lot better in the years since then.
@William Taysom For example, I just - as a test - did alter table (redacted) add column foobar bigint;
on a table with billions of rows in Postgres, and it was instant. It's worth noting that schema changes are also transactional in Postgres.
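To make both points concrete, here's a minimal sketch (hypothetical table and column names, not my real schema). Adding a column with no default (or, since Postgres 11, with a non-volatile default) is a catalog-only change, so it doesn't rewrite the rows:

```sql
-- O(1) in table size: only the system catalog is updated,
-- no row rewrite, regardless of how many rows exist.
ALTER TABLE big_table ADD COLUMN foobar bigint;

-- DDL participates in transactions, so a multi-step
-- migration can be rolled back atomically:
BEGIN;
ALTER TABLE big_table ADD COLUMN baz text;
ALTER TABLE big_table RENAME COLUMN foobar TO quux;
ROLLBACK;  -- big_table is exactly as it was before BEGIN
```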
My personal pain point is wanting to have multiple versions of the redacted table that I can modify independently and then merge later. I have managed to do this in Postgres, but it was not pretty.
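For flavor, one blunt way to sketch that kind of branching in plain SQL (hypothetical things(id PRIMARY KEY, value) table; not what I actually did, which was messier):

```sql
-- "Branch": clone the structure (indexes, constraints,
-- defaults) and copy the data.
CREATE TABLE things_branch (LIKE things INCLUDING ALL);
INSERT INTO things_branch SELECT * FROM things;

-- ...modify things_branch independently...

-- "Merge": upsert branch rows back; the branch wins
-- whenever both sides touched the same id.
INSERT INTO things
SELECT * FROM things_branch
ON CONFLICT (id) DO UPDATE SET value = EXCLUDED.value;
```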
Come to think of it, I was just invited to the beta of Xata, which has branching as an advertised feature. It didn't quite work for me when I tried it, but it's definitely an interesting goal.
Edit: Turns out Xata only branches the schema, not the data.
They basically re-invented the {Entity,Attribute,Value}/{Subject,Predicate,Object} style one sees in datalog/semantic web databases, but calling the Entity/Subject a "thing". Hopefully they kept improving it after 2010 until they realized they actually want triplicate indexing to allow good performance with complete query flexibility.
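For reference, the triple layout with triplicate indexing might look like this in SQL (my sketch, not Reddit's actual schema): one covering index per rotation, so any query pattern with some positions bound and others free is an index range scan.

```sql
-- Subject/Predicate/Object triples.
CREATE TABLE triples (
    subject   bigint NOT NULL,
    predicate bigint NOT NULL,
    object    bigint NOT NULL,
    PRIMARY KEY (subject, predicate, object)  -- SPO order
);
-- The other two rotations cover the remaining query shapes.
CREATE INDEX triples_pos ON triples (predicate, object, subject);
CREATE INDEX triples_osp ON triples (object, subject, predicate);
```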
I'm quite sceptical of apps/software as the best way of structuring computing. I don't pretend to know which alternatives would be better (and like to think there doesn't need to be a single answer), but the explanations for the success of software and the failure of alternatives always seemed pretty underwhelming, often leaning on the fact that software did succeed as proof of its own merit. However, finding an argument to be weak isn't really an explanation, even if it suggests the need for one.
Recently I was wondering if there is explanatory power in "Conway's Law" if applied to large socio-technical systems. Conway was thinking about the internal structure of software as produced within an organisation in How Committees Invent. The paper has much more nuance, but the adage roughly states:
Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure
If we apply this line of thinking to computation at large, we might ask questions about the impact of communication mechanisms inherent to capitalism and their structuring effect on the computing landscape. This maps quite well to software, as there is often a correspondence between software production and individual firms. We might imagine a different economy (e.g. some highly federated, anarchist society) and think of what the organisation of computing might look like. This is something I think is interesting to explore on its own, but it led to a different idea I find quite compelling:
That the success of software is a result of a drive to make computing economically "legible" to capitalism.
(I'm appropriating legibility here from James C. Scott's Seeing Like a State)
This makes some sense as an explanation for software, but it might also offer a new explanation for things like app stores among others.
[This is a slightly rehashed line of thinking I passed around at work that I thought might be interesting to this crowd. This isn't thought through, so just read it as casual speculation. But I do think we need an explanation, if we are ever to move beyond this current way of interacting with digital systems]
Oh also, I'd love to get a sense of whether the question of "why software succeeded / why alternatives didn't" even parses or seems like a useful one to ask.
I have been thinking about this as well, and I also happen to be a big fan of Conway's law as an explanation for design architectures, software or otherwise.
However, what I see as the social foundation of the package-based structure of today's computing environments is not capitalism, at least not directly, but the concepts of intellectual property and legal responsibility. A software package is a unit that is owned by someone (person, company, OS community, ...) and may create liabilities for their owner.
For a different architecture, consider (human) language. There are ownable units as well (books, articles, etc.), but nobody owns English or Chinese. The evolutionary processes are thus very different than for owned units.
If I understand you correctly, it's not "software" in general that you're asking about, but separate, siloed applications each living in its own isolated world.
I suspect part of the cause is historical, since personal computers couldn't handle more than one focused task at a time for over a decade, which was long enough for a way of thinking to become deeply entrenched.
I also suspect that part of the issue is security. Sandboxing each app by itself, like on iOS or Android, is much easier than allowing untrusted code from multiple sources to all coexist in the same object space, in the Smalltalk, OpenDoc, or NewtonScript sense.
I think we could learn something from multiuser games here, since they've been dealing with the issue of letting untrusted bits of code from multiple sources interact with each other for a long time. Second Life, LambdaMOO, LPMud, and object capability systems all seem like potential sources of inspiration here.
BTW, I am trying to encourage a different model for my digital scientific notation (github.com/khinsen/leibniz-pharo). No idea if it will work, of course. I plan to describe and recommend a "no reuse" policy. If you want to work with someone else's code, you copy it into your own project and adapt it as necessary. This is how people have worked with scientific models and theories for centuries, with good results, so I see no reason to change just because the knowledge becomes digital.
BTW, as with languages, scientific theories are nobody's property and nobody's responsibility.
Konrad Hinsen yeah I totally agree there, it's part of what I meant by legibility, as software provides a structure that is amenable to ownership, IP, markets, etc. I think your framing might be a more effective one. As for the language example, that's actually exactly the counter-example I used at work! I'll definitely fold ownership or "ownability" into future discussions.
The reason I like "legibility" is because it doesn't point to a specific economic concept like private property, and so might translate more easily to analysis of other economic systems (or variations of our current one). Hence the federated/anarchist example.
@Personal Dynamic Media yeah that's mostly what I meant by software. And yeah, historical circumstance definitely plays a big role!
Orion Reed You might want to frame it as "apps" rather than "software", as the latter word has a different (and well established) meaning that will trip up some readers, like PDM and myself. But yes, the siloed-apps-in-an-app-store model is... not great.
On a similar note, I'd say "widely adopted/used" (or something similarly concrete, whatever you actually have in mind) rather than "successful".
I think legibility is an interesting perspective, but I don't know if you need to invoke capitalism. End users appreciate legibility, too. It's a lot easier to say/think/believe "I installed Instagram" than "I picked out and copy-pasted/linked to/configured one of several pieces of code, each of which has slightly different interfaces, that lets my computer connect to one of several decentralized image sharing services, each of which have several entry points that might link to different subsets of the network..."
So if you want to get rid of "apps" or whatever big chunk of code/assets you're thinking of, legibility for the end user is something you need to think about. One of the goals I've kept in the back of my mind is for end users to be able to install and uninstall something like a layout and attendant keyboard shortcuts, macros, etc for a complex program. I'd like there to be something of a spectrum between that and installing the large program itself. But there needs to be a "thing" there, with a name and a handle to manipulate. Otherwise you start feeling like grime is accumulating in corners and seams of your system that you can't see.
To be fair, even "apps" are an improvement over the unix model of a script that splatters files all over your system.
Trying to compare to an economic model different from the US, but one that already exists, I was wondering how much of a role the Chinese government has in enforcing software interoperability, since about half of their economy is state-owned. Looks like they have an initiative called "China Standards 2035" that's partially about coordinating technological standards. But as you can read here, it's also just about economic modernization and isn't that heavy-handed. For context, their government is also pretty decentralized (which should be distinguished from how authoritarian it is).
Extremely interesting conversation.
To understand tech in China, I cannot recommend The Great Firechat enough greatfirechat.com. It just so happens that Kevin and Adeh started the podcast shortly before the Chinese Communist Party introduced major changes to how it regulates tech. See also Kevin's brand new book about WeChat amazon.com/First-Superapp-Inside-digital-revolution-ebook/dp/B0B1Q7NYN1.
To sum up the China tech antitrust situation... With consolidation, like one dominant rideshare company (DiDi) and two dominant digital payment systems (Alipay, WeChat Pay), and especially with Jack Ma touting how payment data was enabling Ant Financial to outcompete any bank at assessing creditworthiness, the Party felt it was high time to intervene. No one institution should have too much influence on the Chinese public, except the Party itself. So IPOs have been blocked, Ant Financial was nationalized, and apps can no longer collect more data than they need from people. What else has happened recently? Online tutoring is no longer a thing. A huge industry dissolved in about a day. See also Hong Kong's legislative council being reorganized after it became clear that the existing electoral system would have produced a majority critical of the Party.
The antitrust angle is interesting. Or in general, looking at what effect consolidation has on software. I wonder if there's anything out there about what happened to the old AT&T's code after they got broken up. I'm sure they had to maintain standards, but did they just fork the internals?
Legibility is indeed an important perspective. It is ultimately about different scales of implication/engagement with something. There are "locals" and "outsiders" (for software: developers and users), with legibility being the ease of access for outsiders, even if that incurs a cost for the locals.
Human language is defined by locals (speakers) and makes no concessions for outsiders (everyone who doesn't know the language). They have to deal with whatever is there, or use services (translators/interpreters). Legibility is low.
Scientific theories have locals (scientists) and outsiders (everyone else). Like language, they are defined by and for locals. Outsiders have to learn the science, or use services (science journalists, ...). Legibility is low.
In a complex society, there is room and need for both approaches, and probably an intermediate one that remains to be discovered. What I think matters is awareness of the trade-offs, and that's not yet very well developed.
Open Source is a nice illustration. There are OS projects that deliver industrial products just like software vendors do, only with a different commercial model. Firefox is a nice example. Legibility is high, but locals (developers) don't profit from their work other than through income. There are also OS community projects, with communities seeing themselves as working together on a shared asset, i.e. by and for locals. But many of them (the well-known ones: Linux, Python, ...) are much more about a legible product for outsiders, in return for mindshare/idealism/fame/whatever. That doesn't work out well for those locals who believe in the community story, and we get maintainer burnout as a result.
One more remark: since the beginning of industrialization, Western societies have more and more emphasized legibility and products/markets (i.e. anonymous exchange), to the point that today we measure collective well-being by the amount of anonymous exchange (GDP). Individuals and communities solving their own problems don't contribute to GDP, so their work doesn't count as valuable. That's clearly a mistake in my opinion. As postmodern thinkers correctly point out, legibility lays the basis for power structures that do a lot of harm (James C Scott's book "Seeing like a state" is the classic on this topic). On the other hand, anonymous exchange does produce a real contribution to well-being. It just shouldn't have a monopoly.
I'm not at all convinced legibility is entirely a matter of scale of interaction. It seems perfectly possible to have a system that's illegible to all involved. International intelligence comes to mind, mainly because I've been reading John le Carré lately, but various kinds of computing systems also tend toward total opacity if not carefully managed.
Hey all! I'm looking for a bit of help solving (what I think is) a really simple problem, but I just can't seem to find someone, or the right article, to help me figure it out. It's a simple graphics vector problem, and my inexperience in the space is doing me no favors in abstracting the problem to do the right maths. Does anyone here happen to know a good place to ask technical questions in that space, or would anyone happen to know of friendly soul(s) with time to walk through the issue? StackOverflow and the dregs from days of search-engine digging are chock full of answers that revolve around, but don't tackle, exactly what I need, and the one solution I have found doesn't seem to apply to my current state of code.
I consider help like this the highest form of validation and "raising up", and would happily compensate said helper or producer of article in any reasonably requested way ❤
Here is probably quite good. There are mathematicians here, and plenty of graphics experts… Is it easy to explain?
I think it is, yeah. It really seems so darn trivial, but I just… can't get it! Let me type it out, and if it's helpful, I can upload some of the sketches I have trying to break it down. So, here's the gist:
Keywords: Texture / glyph atlas, macOS, Swift, Metal, iOS, vector space transforms
Given: I've got a set of "vertices" (x, y) in normalized space, from (-1, 1) on the domain and range. Just a simple Cartesian space. I've also got a set of "texture coordinates" (u, v), which instead map to (0, 1) on the domain and range, with the range flipped about the x-axis (v increases downward). This is apparently really common, and it's the way the Metal shading language defines its coordinate spaces.
I also know how to convert from vertex-space to uv-space:
u = (x + 1) / 2
v = -(y - 1) / 2
Also given: I have a texture in UV space where I want to define arbitrary rectangles. These rectangles are glyph-stamps in the [(0,1), (0,1)] bounding box range. These are easily computed given the full size of the texture I am creating rectangles from.
The problem: I have a single set of vertices that define a quad. Top left / right, bottom left / right. These are always centered around the origin. Meaning, I can have a square that fills the vector space with these coordinates:
(-1, 1)
(-1, -1)
( 1, 1)
( 1, -1)
Assume we will "resize" those in some way by symmetrically bringing in the sides as needed - again, all around the origin.
Given that I have those precomputed coordinates above which map to UV space, how do I take those arbitrary vertices in vertex space and map them to the "rectangular sections" of uv-space?
Not sure I followed that completely. Surely all the mappings from x,y space to u,v space will be given by your formulae or their inverses, i.e. x = 2u - 1; y = -2v + 1.
Bit unclear about your use of the words "domain" and "range" in this context.
You asking the question in that way is helpful. Yeah, the inverse does give me the expected vertex for a given uv coordinate, but the issue is that the inverse should always map to the same vertices.
This is a terrible sketch, but let me shame myself anyway:
📷 Untitled 5.jpg
Given the yellow vertices, I need to "transform" them to the matching UVs such that it "fills their space", and the rasterizer can interpolate between the values.
No, sorry, you've lost me. Now I'm unclear why points on your diagram are marked as apparent products of u and v, and why the other points have only v on them.
You said earlier there was a square centred on the coordinate centre doing something. If this is some transform of one of the coordinate systems about the centre point I'd expect that scaling factor to appear in front of the u and v (or its reciprocal) in the relevant formulae.
Otherwise I'm baffled, and since it's gone midnight here I'm hoping to see a lovely solution to your problem when someone picks up the story from another time zone while I'm asleep…
You're wonderful for your thinking and time anyway, and no worries, friend! It's rough when I can't even explain the problem well, lol. That probably means if I dig into the question better, I'll get closer to the answer implicitly…
(I think #present-company is probably the channel for this. From the member handbook: "If you'd like help with your homework, this is the place to ask." Not that this is necessarily homework. But anyway.)
Is the question how to define a transformation that sends four coordinates v1,v2,v3,v4 to the four coordinates (0.25,0.25), (0.25, 0.75), (0.75, 0.25), (0.75, 0.75)?
I'll happily remove this and pop it over there. I didn't realize that was a line in there - my apologies for the break.
I think that's my question, yeah… The target coordinates will vary, but that's the idea. The reason being that the same vertices (v1-4) will always pipe through the graphics pipeline, but the constants that define each instance being drawn will be unique. I want to take an input vertex which is at one of those coordinates, and say, "This coordinate is at ___ u,v position within the bounds given by the constants that define this instance."
I think I haven't defined the problem correctly, because the more I write it out, the more I think something is missing… I'm just not sure.
I'll describe how you can map vectors v1...v4 to vectors w1...w4, assuming they label the corners of the rectangles in the order top-left, bottom-left, top-right, bottom-right. The mapping can be expressed as the composite of three simpler maps:
(1) Translate the original rectangle to the origin. This is achieved by the mapping sending an arbitrary vector x to x-v1.
(2) Scale the resulting rectangle so it is the same size as the rectangle w1...w4. In the horizontal direction you have to scale by |w3-w1|/|v3-v1|. In the vertical direction it is |w2-w1|/|v2-v1|.
(3) Translate the resulting rectangle from the origin to w1. This is the mapping sending an arbitrary vector x to x+w1. Composing these three maps will map v1...v4 to w1...w4.
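If it helps to see it as a single formula, here is the composite written out (my notation, same assumptions: axis-aligned rectangles, corners ordered as above):

```latex
% Per-axis scale factors from step (2):
%   s_x = (w_{3,x} - w_{1,x}) / (v_{3,x} - v_{1,x})
%   s_y = (w_{2,y} - w_{1,y}) / (v_{2,y} - v_{1,y})
\[
  f(x) = w_1
       + \begin{pmatrix} s_x & 0 \\ 0 & s_y \end{pmatrix}
         (x - v_1)
\]
% Sanity check: f(v_1) = w_1, and since v_3 - v_1 is purely
% horizontal, f(v_3) = w_1 + (s_x (v_{3,x} - v_{1,x}), 0) = w_3.
```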
I hope this helps. I don't know graphics programming, but if you have any other math questions, or need some more detail, I'm glad to help.
This is quite helpful, Robin. I'm going to save this snippet because I'm going to be scaling nodes soon, and this is exactly the math I think I'll need. I sincerely appreciate it.
I'm personally quite happy to see this thread in this channel. I'm happy to give long-time members the benefit of the doubt that they're working on some sort of FoC project rather than extrinsically motivated "homework". It's not like there's a lot of thinking together going on here anyway
I love to read it, Kartik, haha. This is something I absolutely plan to share with FoC - as soon as I have it working. I've been making leaps all this week, and I've been reaching out more now that I'm getting closer to wrapping up this next step.
You want a simple matrix that maps between the two vector spaces (which you can invert if you need to go the other way). A simple 2x2 matrix can't do translations, which is what homogeneous coordinates solve. Instead of a point (x, y) you make it (x, y, 1) and multiply by a 3x3 matrix whose bottom row is (0, 0, 1). That extra 1 on the end is leveraged by the transformation matrix to move the origin. See web.cse.ohio-state.edu/~shen.94/681/Site/Slides_files/transformation_review.pdf
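To make the trick concrete, here's a generic 2D scale-plus-translate in homogeneous form (a sketch with placeholder s and t values, not tied to the coordinates above):

```latex
\[
  \begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix}
  =
  \begin{pmatrix}
    s_x & 0   & t_x \\
    0   & s_y & t_y \\
    0   & 0   & 1
  \end{pmatrix}
  \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
  =
  \begin{pmatrix} s_x x + t_x \\ s_y y + t_y \\ 1 \end{pmatrix}
\]
% Without the extra row and column, a 2x2 matrix could only
% scale, rotate, or shear; t_x and t_y carry the translation.
```

Composing several transforms then collapses into a single matrix product.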
I would say you don't want to break the transform down into translating then rotating etc. It's all covered by vector-matrix multiplication, and it's far more composable and efficient (and idiomatic graphics programming) to stick to the matrix representation of transforms.
Here is a very concrete example in practice that matches your problem: math.stackexchange.com/questions/296794/finding-the-transform-matrix-from-4-projected-points-with-javascript