Mariano Guerra 2023-03-21 09:14:35

Is there a "Grammar of Data Schemas/Constraints" similar to "Grammar of Graphics"? Any schema definition language you find interesting?

Konrad Hinsen 2023-03-23 08:48:28

An interesting question. The grammar of graphics sums up how to present data for good reception by the human visual system. A grammar of data schemas would address which external constraints? There's the "human-readable" constraint, which led to JSON, YAML, TOML, etc. Schemas live in a layer on top of those formats. So what would be useful constraints in that layer?

William Taysom 2023-03-24 07:17:19

Friends, I don't know what to make of developments in AI these days. Having worked on dialog systems in the aughts and having loosely followed developments since (I recall preparing a talk around 2010 which left me pretty enthusiastic about ML applications, in contrast to the App-and-Facebookification of "tech" — that was on a time horizon of a few years, which ended up being a decade plus), every day I check in on Twitter I see more exciting stuff than I can possibly process. I was just writing to someone yesterday about how in six months' time we'll have LLMs acting as the front-end to knowledge bases and rigorous computational systems, and then we'll need to focus on getting the human, the AI, and the formal model all on the same page.

As has already been noted in #linking-together today, my estimate was off by roughly six months. Consider, "I've developed a lot of plugin systems, and the OpenAI ChatGPT plugin interface might be the damn craziest and most impressive approach I've ever seen in computing in my entire life. For those who aren't aware: you write an OpenAPI manifest for your API, use human language descriptions for everything, and that's it. You let the model figure out how to auth, chain calls, process data in between, format it for viewing, etc. There's absolutely zero glue code."
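To make the quoted mechanism concrete, here is a minimal, hypothetical OpenAPI fragment of the kind the quote describes. The API, paths, operation IDs, and descriptions are invented for illustration (the real plugin interface also involves a separate manifest file); the point is that the plain-language `description` and `summary` fields are all the model gets, and it works out when and how to call each operation on its own:

```yaml
# Hypothetical OpenAPI description for a to-do API exposed as a plugin.
# The model reads the plain-language descriptions and decides when and
# how to call each operation — no glue code on the developer's side.
openapi: 3.0.1
info:
  title: TODO Plugin
  description: Manage a user's to-do list. Use when the user asks to add, list, or remove tasks.
  version: 1.0.0
paths:
  /todos:
    get:
      operationId: listTodos
      summary: Return the user's current to-do items.
      responses:
        "200":
          description: A JSON array of to-do strings.
    post:
      operationId: addTodo
      summary: Add a new item to the user's to-do list.
      responses:
        "200":
          description: Confirmation of the added item.
```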

If you can tolerate his prose, Stephen Wolfram has a long post; the "Wolfram Language as the Language for Human-AI Collaboration" section is most relevant to Future of Coding. What do these developments mean for the Future of Coding? And how are you all holding up? Me? I can hardly process what's happening, let alone what to do about it.

Konrad Hinsen 2023-03-24 07:40:07

Recent AI developments are almost a denial-of-service attack on intellectual life. Everybody is struggling to keep up. It's almost guaranteed that the immediate impact of all this will be negative: bad AI applications, rushed attempts at useless forms of integration, etc.

It would be great if techies around the world would silently play with these tools for a while before rushing to market with their new toys.

Konrad Hinsen 2023-03-24 07:41:39

That said, I agree about the nice prospect of AI as glue. Or, more generally, for the "outer", most user-facing aspects of software. This echoes the structure of pre-computing work based on good old mathematics: plain-language reasoning with embedded formal systems.

Konrad Hinsen 2023-03-24 07:42:53

Oh, one final piece of advice to techies working on AI integration: wait for Open Source AIs. If you all jump on OpenAI's offerings, you will probably regret it when OpenAI tightens the screws (as it invariably will).

Tudor Girba 2023-03-24 08:10:29

The integration with Wolfram* is certainly interesting. Still, there is a large difference between something that fits on a screen and a significant system.

As long as humans decide what goes into a system, there will remain two challenges:

  • Identify a relevant question that holds value. This is a skill that can be built. It’s not about prompt engineering, but about understanding cause and effect in a specific domain.
  • Figure out where, how, and why a specific solution fits in the system. A completion is interesting, but from a system-engineering perspective we still need to evaluate it. That has been the main blocker in system development for quite a while now, even without AI.

For both of these, when the problem and the solution fit on a screen, they can potentially be addressed implicitly (picture generation is an extreme case of this: I believe a reason why they are so popular is that people can evaluate them quickly and implicitly). When they do not fit on a screen you still have to evaluate them, but that evaluation can be significantly more expensive. Of course, you can use tools to address that problem, too, and this will raise the next level and so on. Which then leads to a discipline of figuring systems out.

Our approach so far has been to compress the system for a specific perspective, and this turns out to greatly accelerate the ability to reason about systems. I believe this area is in its infancy and that there is great potential (both intellectually and as a competitive advantage).

Stefan Lesser 2023-03-24 08:47:28

A few friends of mine and I are in a small private chat group where we discuss tech stuff and forward each other links to articles. You can imagine what that chat has become over the last few weeks. Nothing but AI. (We used to debate Apple’s upcoming headset, which feels moot now; I chuckle at the thought of a product announcement that demos any Siri-based use case at the moment.)

On top of that, I get articles about AI forwarded from friends outside of my tech bubble, which is a clear indicator to me that this has a more significant cultural impact than the stuff we usually go on about. That led me to re-prioritize somewhat, and now I spend quite some time reading about AI, and also playing with it.

As often with technology, we’re at the whim of the companies that push it on us, so it’s partially like a roller coaster ride where you can do little but make sure you’re strapped in properly and try to enjoy it.

Oh, you could choose not to ride it in the first place, of course. But then there’s nobody to talk to anymore, because everybody is on the ride and only wants to talk about it, and about the terrible things that will happen to you at the end of it, where it’s unclear what exactly will happen (part of the marketing that got you on here, I guess) and people suspect the company who built the roller coaster hasn’t fully done all the safety checks (and weirdly there’s no regulation either, so they got away with it).

The good news is, most of us are in the same car (it’s massive, apparently), so take a deep breath, put your hands in the air, and brace for the next inversion… 🎢

(Some people insist that tweeting as loud as you can from the top of your lungs helps you feel better.)

Jonas 2023-03-24 10:14:15

As I mentioned in another thread on here: I'm definitely already thinking about what I'll do with myself in the future - but these considerations have to ignore the wider societal implications of the technology and can only view AI as a "programmer replacement tech", because otherwise the system of things to consider gets too complex. So really, the thoughts are worthless.

Also, I don't know if these seismic shifts will come to pass at all, or if my considerations will be relevant. Right now it's hard to tell where we are on the curve of possible progress with transformer-based AI, as well as what capabilities already present in the existing models haven't yet been discovered, thought of, or exploited.

Part of me also isn't entirely sure about the AI safety/alignment talk going on. I'd like more takes by people who aren't directly or indirectly involved with OpenAI, Microsoft etc. in some form. Because these companies and people would certainly stand to benefit from making GPT sound "more AGI" than it really is.

Jonas 2023-03-24 10:17:54

The direction that excites me is LLaMa/Alpaca + Langchain. But the direction that I'm fearing all of this will take (and that I currently use and even pay for, to be honest) is the corporate capture that OpenAI and Microsoft are currently executing.

Jonas 2023-03-24 10:20:01

Also, another problem I have: I was a Machine Learning Engineer in the past, at the height of the CNN hype shortly before transformers hit the scene. And when you're not one of the handful of people doing foundational work/research, I feel like ML engineering is super boring. It's basically - ironically - all glue code, all of the time.

Jonas 2023-03-24 10:24:20

But maybe the curve we're on really is so steep that there won't even be a real transition period where all programmers "have to become" ML engineers for a while, and we're going straight for whatever it is that follows? 😃

William Taysom 2023-03-24 12:52:01

Psychoanalysis is what follows: the art of using dialog to tune the black box. Keeping my own years of observation in mind, I wonder if Bryan Caplan has now gauged the situation correctly: "AI enthusiasts have cried wolf for decades. GPT-4 is the wolf. I've seen it with my own eyes."

What has he seen? "To my surprise and no small dismay, GPT-4 got an A. It earned 73/100, which would have been the fourth-highest score on the test." In contrast, January's ChatGPT did "a fine job of imitating a very weak GMU econ student." He continues, "I wouldn’t have been surprised by a C this year, a B in three years, and a 50/50 A/B mix by 2029. An A already? Base rates have clearly failed me." As far as I can see, we're at the start of this curve rather than the end, but I can hardly see anything because the curve looks like a cliff.

Ivan Reese 2023-03-24 15:03:11

I'm also thinking about this a lot, but I'll be brief:

  • This changes how computers interpret our writing, which changes textual programming, addressing many of the things I dislike about it. I think this is going to be a setback for visual programming research, which seems of most interest to those who are dissatisfied with text code. Yet, it could force the handful of people who stick around working on VP to move on from merely wrapping text in boxes.
  • There's surely going to be some way to use these transformers to invent a new visual programming. Very curious what that could look like. Also curious whether _it will ever happen_. It's possible that this flurry of excitement over new language-centric (written and spoken) ways of interacting with computers starves or displaces all the other kinds of interaction for a good long while. Is this the end of direct manipulation as we know it?

William Taysom 2023-03-25 08:54:19

My hope for visual programming is the automation of the UI tediousness that currently limits what we can do. Looking at image gen, prompt engineering is the least good part of the process. Better are tools that allow for interactive, iterative refinement. I’m surprised by how good Chat is at selectively editing an existing text.

Konrad Hinsen 2023-03-25 10:23:24

Thanks Stefan Lesser for describing very well what I meant by a denial-of-service attack.

Two additions:

  • The current AI developments are obviously an important step in the evolution of information technology, which is our focus here. But from the wider perspective of society, or even just technology, it's a long-term concern. The immediate problems society has to deal with are largely unrelated to AI. That's why I see the denial-of-service attack as so problematic.
  • As long as AI technology implies corporate capture (i.e. as long as we don't have good-enough Open Source AI), I doubt AI will have any positive impact on society.

Conclusion: our most urgent problem is how to protect ourselves against short-term AI damage.

Stefan Lesser 2023-03-25 11:49:40

I said elsewhere that I’m sort of optimistic about all this, because I think it gets us to the tipping point of realizing what we've been doing in tech all along: for a while now, the main objective of most technologies has been to make a rich person (usually an old white dude) even richer. Positive changes to society were basically happy accidents along the way, and we've accepted a lot of not-so-happy accidents along the way too.

If generative AI transforms business and creative industries as it looks like it will, it'll just become harder for tech leaders to pretend that tech is neutral and there's no need to take any responsibility for “a little bit of disruptive innovation”.

I misleadingly proposed it as a naturally following consequence elsewhere, but let me rephrase it as just a hope I personally have: if an AI can do what you can do, as well as or better than you can, then we need to ask ourselves, “What is it that I can contribute that AI can't?” And I personally am in love with that question. For me, it is a tough question, but just the process of pondering it already leads to great places and a much deeper sense of purpose and significance than I ever felt in any tech job before.

Ivan Reese 2023-03-25 14:11:20

AI is inauthentic. So taking capitalism as a given, whenever customers value authenticity you'll find humans doing work that could have been done by an AI.

Stefan Lesser 2023-03-25 15:36:11

I found this video to be therapeutic: (If you dislike iOS or Swift, platform and language aren't really relevant for the point he’s making; I encourage you to watch it anyway.)

It does a good job of demonstrating:

  • ChatGPT is far from good enough today — sure, it is likely to improve quickly, but we still have some more time to process this
  • If you really care about what you’re doing, and you are willing to sweat the details, your results will likely be better in many ways, even as AI improves (one of these ways being more authentic, to connect it to what Ivan Reese just wrote)
  • There’s still lots of opportunity to reframe the question and ask, “How can we use AI to support us, instead of replace us?"

It’s up to us how we use these AI systems. Do we want them to automate (and eventually make us obsolete), or do we want them to augment? So far, it looks like the same technology is equally capable of doing either. Whether AI is going to replace us seems to depend, at least partially, on how we choose to use it, what we ask it to do for us, and what results we are willing to settle for.

Andrew F 2023-03-25 20:10:47

So far, people demonstrably prefer low price over authenticity. That might kick in, in the long run (it's the only hope for fiction IMO, but I think it's a pretty reasonable one). In the near term, people will continue to follow the money. The vast bulk of people won't be confident enough in any particular disaster scenario to sacrifice their (very real) short term cost concerns. They're not wrong: no one knows what's going to happen.

I know I'm not fully processing even the full degree of uncertainty about the world. My brain kind of does a quick spin-up/safety shutdown routine when I try to think about it. What do atoms know when a crystal melts? Can they say whether they'll be integrated in the next structure, if/when the environment cools?

But of course there's the part of my brain trying to figure out if AI models can directly output structured data for VPLs instead of text. That doesn't stop.
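That last question can be sketched concretely: instead of emitting prose, the model is prompted to emit a JSON node graph, which the host program validates before loading it into a visual editor. This is a minimal sketch under stated assumptions — the graph format, field names, and the sample model output below are all invented for illustration, not any real VPL's format:

```python
import json

# Hypothetical JSON emitted by a language model when asked for a
# dataflow graph instead of free text. Node and edge field names
# are invented for this sketch.
model_output = """
{
  "nodes": [
    {"id": "a", "op": "const", "value": 2},
    {"id": "b", "op": "const", "value": 3},
    {"id": "sum", "op": "add"}
  ],
  "edges": [
    {"from": "a", "to": "sum"},
    {"from": "b", "to": "sum"}
  ]
}
"""

def load_graph(text):
    """Parse and sanity-check a node graph before handing it to a VPL editor."""
    graph = json.loads(text)
    node_ids = {node["id"] for node in graph["nodes"]}
    for edge in graph["edges"]:
        # Reject edges that reference nodes the model never declared.
        if edge["from"] not in node_ids or edge["to"] not in node_ids:
            raise ValueError(f"dangling edge: {edge}")
    return graph

graph = load_graph(model_output)
print(len(graph["nodes"]))  # 3
```

The interesting design question is exactly Andrew's: whether the model can be made to emit this structure reliably, and how much of the validation layer the host still has to own.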

Ivan Reese 2023-03-25 20:35:46

"How can we use AI to support us, instead of replace us?"

It's not up to "us", where by "us" I mean "99.9999% of people". As usual, the pervading fear doesn't stem directly from the technology itself, but rather how wealthy and powerful ~people~ will use AI to further tighten their grip on the rest of the world.

It's exactly the same dynamic that makes most current human labour invisible and anonymous. I know who made my belt because it was hand-made for me (see: hipster), but I don't know who made my sneakers. I know who made the art on my walls, but not the art on the covers of my books. I know who made the music in my mp3 folder, but not the music in my streaming playlists.

AI is going to intensify existing forces that separate creation and consumption. It's going to turn up the heat by a few degrees, but we're already on fire.

This makes me sound pessimistic, but I think I'm feeling more neutral than anything. My guess is that things will continue to go the way they have gone, at roughly the rate they've already been going, maybe a little quicker.

Andrew F 2023-03-25 20:48:21

I hope so, but I'm afraid you're underestimating the orders of magnitude included on the scale under the heading "On Fire". I'm afraid it's going to be a lot quicker, not a little quicker.

Remember that all the comforting takes about ChatGPT we've been hearing for the past few months are obsolete. Any conclusions drawn on the basis of GPT-3's weaknesses are obsolete. And I bet we're going to start from scratch again before anyone is ready (probably including OpenAI, judging by their appsec record to date).

Stefan Lesser 2023-03-25 22:19:58

To be clear, I was talking about us, the people reading this forum. We’re not the 99.whatever%. We may not be Musk or Zuckerberg ourselves, but some of us here probably work for them. Or the next Musk or Zuckerberg could be reading here.

You may not be rich. You may feel powerless. But if you spend your time here, you are likely privileged. You likely work in tech, even if you’re “just” an IC “following orders”. But how ~we~ decide to use AI has disproportionately more impact than the choices of the 99% you refer to. I’d say a lot of it is up to us, here, now.

I don’t know what exactly to do about it either. I doubt anybody does. So we could all just agree that we’re all f*cked, and maybe there’ll be a chance in the future where we can collectively look back and reminisce about how right we all were that this capitalism thing was ultimately toxic. Or we could try to paint faint pictures of worlds that could be, even if they’re hopelessly unlikely to ever materialize. You know, like we pretend with all that visual programming stuff. :)

Ivan Reese 2023-03-26 01:43:34

We can also choose how (or how not) to use AI in our own lives, for our own pursuits. I'm really fond of Konrad's advice near the top of this thread: use open source AIs. Generalizing: use AI authentically.

Vijay Chakravarthy 2023-03-26 19:08:21

Not sure you’d have that choice. If people around you use AI to get 10x productivity you’d likely have to do the same…