Cool demo, but I think it's another example of too much AI/ML hype.
The most telling moment for me was when the model made an error in the demo on the "compute_total_price" function.
First, note that the generation is guided by docstring comments. That very quickly devolves into a classic black-box optimization problem.
I can easily imagine someone spending hours tweaking the wording of their comment to try to get the model to generate the right code. How is that better than what we do today, where we tweak code or add annotations to coax the compiler into producing more optimal output? It's arguably worse: natural language is far less structured (the search space is much larger), and an ML model is far more stochastic than an optimizing compiler.
Second, consider that even after the presenter "fixed" the comment, the model produced code that was almost right but had a bug: 80% off instead of 20% off.
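To make that bug concrete, here's a hypothetical sketch. Only the name "compute_total_price" and the 20%-off intent come from the demo; the function bodies are my guess at what the buggy and intended versions could have looked like:

```python
def compute_total_price_buggy(price: float, discount_pct: float = 20.0) -> float:
    """Apply a percentage discount to a price."""
    # Bug: returns the discount fraction of the price instead of subtracting it,
    # so a 20% discount effectively becomes 80% off.
    return price * (discount_pct / 100)

def compute_total_price(price: float, discount_pct: float = 20.0) -> float:
    """Apply a percentage discount to a price."""
    # Correct: the customer pays (100 - discount)% of the price.
    return price * (1 - discount_pct / 100)

print(compute_total_price_buggy(100.0))  # 20.0 -- the "80% off" bug
print(compute_total_price(100.0))        # 80.0 -- the intended 20% off
```

The point is how plausible the buggy version looks at a glance; both lines read like reasonable discount math.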
The presenter writes that off as no big deal, and in a small toy example like this it is easy to find and correct the error. But can you imagine that situation in a much larger code base, or even a single moderately complex function? It's well known that reading (and understanding) code is much harder than writing it. Anybody who has had to fix a bug in a dense legacy code base will tell you it's far harder to find the bug than to fix it, often even if you were the original author!
This feels less like "pair programming" from the future, and more like "instant legacy code generation". 😢
I think ML is useful in general, but I personally feel this is a case where more black-box magic makes things worse.