Just finished listening…great!
Some thoughts on the whole “drawing the interface” vs. “building a machine arranging widgets using a grammar”:
- Widgets vs. drawn UI
We actually used to be much more in the “draw the interface” space: basically Views that draw whatever they have to display. Widgets were added later on for standard interactive elements (text boxes, sliders, etc.).
And there was a clear distinction: for most applications, the dynamic content (the document) would be drawn dynamically by a custom view, whereas ancillary/auxiliary information (inspectors etc.) would be fixed layouts of widgets, which might be dynamically enabled or disabled.
But the whole structure of the UI was largely fixed (dynamic content inside the views + widgets in static layouts).
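That split can be sketched as a toy model (hypothetical names, not any real framework’s API): a custom view that produces its dynamic content by “drawing”, next to an inspector whose widget structure is fixed and only ever enabled or disabled.

```typescript
// Toy model of the classic split: a custom drawn view for dynamic
// content, plus a static arrangement of widgets for the inspector.

interface Widget {
  label: string;
  enabled: boolean; // widgets get enabled/disabled, not restructured
}

// Custom view: content comes from drawing, not from widget structure.
class DocumentView {
  constructor(private lines: string[]) {}
  draw(): string {
    // "Drawing" here is just concatenation; a real view would paint.
    return this.lines.join("\n");
  }
}

// Inspector: the widget *structure* never changes after construction;
// only the enabled flags do.
class Inspector {
  widgets: Widget[] = [
    { label: "Font", enabled: true },
    { label: "Size", enabled: true },
    { label: "Kerning", enabled: false },
  ];
  setEnabled(label: string, enabled: boolean): void {
    const w = this.widgets.find((x) => x.label === label);
    if (w) w.enabled = enabled;
  }
}

const doc = new DocumentView(["Hello,", "dynamic content"]);
const inspector = new Inspector();
inspector.setEnabled("Kerning", true);

console.log(doc.draw());
console.log(inspector.widgets.map((w) => `${w.label}:${w.enabled}`).join(" "));
```

The point of the sketch is just that the two halves change in different ways: the view’s output varies with the document, while the inspector only ever flips flags on a fixed layout.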
Although there was a bit of a trend towards widgets, that trend really took off with the Web and the iPhone.
With the iPhone, the content got more dynamic, partly due to latency hiding with animations, partly because the small screen makes it necessary to hide unused UI rather than just disable it. At the same time, our tooling got more static: UIViews with dynamic content are discouraged, with preference given to static layers that are moved in, out, and around.
With the DOM, you really don’t have many options except to change the structure if you want dynamic content (Canvas notwithstanding).
So we’ve been moving more and more towards a situation where even purely informational applications display their information via these static/rigid widget sets, at the same time that the information itself got more dynamic.