The fantastic magit mode in emacs (git integration) has text-based, keyboard-driven menus that you interact with either by typing single letters (as shown in the screenshot) or by moving the text cursor over a specific option and pressing enter (which either takes you to another menu, performs an action, prompts you for some input, or cycles a setting). The mouse also works, but its use is extremely rare. Is there any UX research into making GUIs keyboard-driven to the extent that the mouse becomes obsolete?
What is the best video editing software? I have an idea for an async way to do zoom conferences, and want to make a single page mockup, but need to copy the UX of the best video editing software out there
FWIW I’ve had good experiences with Camtasia. But not free.
Recommendations for the best CAD software for a condo renovation project?
For casual use, SketchUp is decent. We designed our house using that tool (and then handed the plans off to the builder, who sent them off to the people doing the actual engineering drawings).
SweetHome3d - looks dated but holy damn it's good for visualising stuff
Cool, a web version: http://www.sweethome3d.com/SweetHome3DJSOnline.jsp
I've been thinking about an alternative approach to "function calling" (particularly, for declarative languages). Instead of having functions, where every invocation must explicitly feed in every argument, what if:
let f = x * x
(note the absence of a parameter list) and then later write let x = 3; let y = f
, which sets y to 9. It's important for the substitution to be lazy, since this retains the semantics of traditional functions, including the ability to make recursive definitions. Given this, I don't believe I'm re-inventing macros, though I welcome enlightenment.
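A rough sketch of those semantics in Python, with an explicit environment dict standing in for the enclosing scope (the names `defs` and `use` are invented for illustration; a real implementation would do this lazily inside the evaluator):

```python
# Each "function" is a definition with free variables; substitution is
# lazy, happening only when the definition is referenced by name.
defs = {"f": lambda env: env["x"] * env["x"]}

def use(name, env):
    # Look up the definition and evaluate it against the bindings
    # currently in scope at the reference site.
    return defs[name](env)

scope = {"x": 3}        # let x = 3
y = use("f", scope)     # let y = f
assert y == 9
```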
You can combine this with the usual nested syntactic scoping rules, so it's definitely not the same as using global variables, or the same as closures (where all the free variables are bound at the definition site). The only practical difference seems to be the UX, and the biggest UX downside is probably readability (which invocations consume which values?), but I'm confident an IDE can make the data flow more explicit.
Advantages of this approach:
- You can set a color just once, and it will be automatically consumed by all drawing functions (i.e. expressions-with-free-variables) that are invoked in that scope. This actually obviates the "type classes" feature that functional languages often pursue. (Type classes aim to enable exactly this kind of "implicit argument passing", but achieve it in a more convoluted way.) The color variable used in a drawing library should be the same across all drawing functions.
I'd love to hear others' thoughts. Has this been tried before? Are there additional advantages or disadvantages that I've missed? Is it worth a try? Note that I'm thinking specifically about declarative languages here. Imperative languages may add complications (but may not!). Is this the same as unhygienic macros? (I love unhygienic macros, but I haven't tried to program with just them.)
It shares similarities with macros, but you should still be able to define recursive functions using this approach, and thus substitution needs to be able to occur "lazily". I'm not sure if there are macro systems which can do that; I'm not very familiar with macro systems in general.
I added the word "lazy" to the original post.
Interesting. One phrase for literature surveys may be "lazy call by name"
Edit: Apparently call by name as defined in the literature is always lazy: https://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_name. So this may be subtly different in some way I can't brain atm.
this is how our english stuff works, since in natural language you often don’t refer to arguments if they’re obvious
the important part is that we show you our explicit interpretation and allow you to correct it in case we picked the wrong thing
There is some similarity with APL trains, hooks and forks, IMHO. Unless I’m not understanding this correctly.
There is a bit of a learning curve with APL trains. Note that the analogy to English holds - for most people English has a few years long learning curve going from characters to words to sentences.
Wouldn’t this be a problem where f = x*x and f = y*y are not equivalent?
Are you talking about comparing the two expressions under an equality operator: x*x == y*y? Equality of functions is already undefined in most programming languages.
If you mean that swapping one invocation for the other does not yield the same program, then that's true, but I don't think that will be a problem. The user of the function just has to indicate which variable they want to bind at the invocation site.
And you can always rename the variables by saying let x = z; let y = z if you want a set of functions to all use the same variable name, post-hoc.
I believe the point might be that constraints on the names of variables could affect composability. If you want to call f you have to have a variable called x. But now say you have another function that needs y for the same variable. Or x for a different variable.
None of this is insurmountable, but might add noise.
I think my above comment + my second dot point under "Disadvantages" addresses those things 🙂 (I just extended the dot point to add more detail)
Ok, maybe. Capture typically involves reasoning about a single call in isolation. I'm thinking about situations where each call is fine in isolation but there's impedance in putting them into a single block. So you end up creating extra block boundaries.
Ah, I see your edit. Yeah, that's basically the impedance mismatch I was alluding to. Not a huge deal. Typically when we worry about 'capture' we're concerned about insidious bugs. This is more a case of fixing a new kind of syntax error.
I have to deal with some additional constraints akin to these in Mu. It's a cost, but sometimes it's worth paying.
This concept is pretty similar to dynamically scoped variables. e.g. emacs lisp:
(defun f () (* x x))
(let ((x 2))
(cl-assert (= (f) 4)))
See Richard Stallman’s paper about emacs on why dynamic scoping promotes modularity: https://www.gnu.org/software/emacs/emacs-paper.html#SEC17
Worth noting that while most programming languages use lexically scoped variables, a variety of programming frameworks include dynamically scoped abstractions. Most notably React contexts.
Yeah, this will probably "feel similar" to dynamic scoping, though importantly, you can still determine which bindings will be used by an expression using purely syntactic reasoning. I'll read that paper, thanks 🙂 (edit: The arguments Stallman makes for dynamic scope are pretty weak, IMO, and don't overlap with the arguments in my proposal.)
i could see problems with accidentally reusing a variable name and causing unintentional problems.
what about nested for loops
for i in ...:
for j in ...:
# eval a function that expects an index to be assigned to i but you want it use j.
I cover that in "Disadvantage" dot point 2. In short: you wouldn't ever be using "i" as a parameter name. You'd be using a unique UUID, and perhaps you'd use "i" as a human-friendly label for it, to be displayed in the IDE.
Also, I didn't explicitly mention that I'm thinking about declarative languages (I always make this mistake). It may be the case that imperative languages have specific qualities that introduce additional problems.
Yeah, I think I missed how your proposal differs from dynamic scoping Nick Smith.
Well, if you put a function definition within another function definition, the inner function should still be able to "use" variables from the outer function, i.e. binding the variable in the outer function might also bind it in the inner function. In other words, you can choose between lexical scoping and dynamic scoping on a per variable basis. Indeed, with every variable usage, you need a means to indicate which parent scope you want the variable to be bindable from; essentially you want every scope to indicate its "parameter list" (free variable list). This puts you somewhere in-between lexical scoping and dynamic scoping. It will require a smart syntax to be comprehensible, of course! It would work similarly to how "quoting" works in Lisp or Julia.
To me this proposal sounds a lot like term rewriting, except that typical implementations of term rewriting include more sophisticated pattern matching.
I've considered very similar ideas, down to the example of graphics/color and idea of library-specific symbols. You might also look into algebraic effects as an underlying formalism for variable lookup. (That should still let you pick which variables are dynamically or lexically scoped, but it's too late for me to be confident). Also effects are just really cool in general, which is why today I lean towards using them to handle this use case by-the-way.
Konrad Hinsen Does that perspective yield new insight? I thought all language semantics could be expressed as a term rewriting system.
Nick Smith In the end, everything that's Turing complete is equivalent. But the UX is very different. The particularity of term rewriting as compared to more popular functional computation frameworks is that it focuses on terms, which represent data and function applications, over rules, which represent the transformations done by the functions.
Yeah sure. I guess term rewriting might be a useful way to phrase the semantics for the purposes of explanation.
Hi Nick - have you looked at implicits in Scala? It allows code quite similar to the example let y = f if f or its parameter is marked as implicit. implicit can be used at the value and at the type level. At the value level they are mostly used to reduce boilerplate; for example, passing around a db connection throughout the callstack could be done by marking it as an implicit parameter, and then not having to explicitly pass it on a function invocation.
At the type level - it is used to support a number of the advanced Scala language features.
I would say it's a very actively used feature of the language. Scala has a new major release coming in which a number of the lessons learned from Scala 2 are being used to refine some areas of the language design, and implicits are getting quite a bit of a rework; so it could be an interesting case study.
Yup! I’m broadly familiar with Scala implicits (but of course, thanks for suggesting 🙂). I put them in the same bucket as “type classes”, which I mentioned in my original post. Is there some benefit of Scala implicits which you think may not exist in my proposal? (Ignoring the type stuff). Implicits otherwise seem like a much more complicated means to achieve the same thing.
It might be worth investigating the problems Scala users face using implicits, of course. Some of the problems may translate to this proposal.
There does seem to be quite a bit of overlap between the two ideas.
Building type classes is one of the main ways implicits are used in scala; and as you say comes with a fair amount of complexity.
But it is also possible to use implicits without creating type classes. As a prototyping idea - you could mark all parameters as implicit - and then have the reduced boilerplate on the function invocation; and it would also be a way to explore how often implicit resolution issues start coming up (obv that would be limited to Scala's resolution rules; but could still be an interesting worked example)
Scala 3 is also adding "implicit functions" - so as you say, it could be an area to draw "lessons from the trenches" and possibly even some extra inspiration from as well.
https://www.scala-lang.org/blog/2016/12/07/implicit-function-types.html
It does indeed sound a little like implicit arguments. As well as Scala you could compare with Agda and Idris.
I've come to think of implicitness as being "term inference" by analogy with type inference.
All the languages I know of which have implicit arguments require both the argument positions and the corresponding definitions to be annotated in some way. This has been bothering me for a while because it seems to confuse inference/elaboration with type signatures, and I've been wondering how a language might look if these annotations were taken out of signatures (everything is just an argument) and either indicated somewhere else, or alternatively the compiler just tries to infer as much as it possibly can given all in-scope definitions ... you could approximate this in Scala by having a bunch of definitions where everything is marked as implicit.
I'll add to the others that this gives me dynamic scope vibes, though possibly with a dash of unification if you choose logic-programming semantics for filling in the missing values through inference?
Implicits effectively do encode logic programming semantics. In Agda and Idris that's an explicit aim (term inference is basically proof search). Scala is the same but somewhat accidentally.
Interesting! I need to think more about the possible similarity to logic programming (which is already inspiring me in other ways 🙂)
Brent @Miles Sabin Ok, so I've investigated implicits in Scala 3, and the related features in other languages (traits, protocols...) including Coq and Agda. All of them have one thing in common: they use the type system to deduce which arguments should be passed implicitly. Most languages do this in a very messy way: they look at all the implicits/traits/protocols in the current module, submodule, imported modules etc., and the compiler picks one of those. This is a static selection strategy. In my proposal, the implicit is provided on the stack like any other variable. There's no fancy "resolution algorithm" involved, and the implicit value can be constructed dynamically, which is a whole different kettle of fish.
Now, Scala 3 offers something similar: there's a command to specify a "given Int" or a "given Bool" on the stack that should be used as an implicit argument. The problem with this is that you can only declare one implicit value per type (in a given scope), whereas my proposal doesn't care about types at all: it uses unique parameter names instead. Also, Scala's "given" declarations can't shadow (i.e. override) each other, whereas variables obviously can. This allows you to implement the equivalent of "default arguments", and override them on specific invocations.
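The shadowing behaviour could be sketched in Python with a ChainMap of scopes (all names here are invented; this just illustrates "default arguments" being overridden by a nearer binding):

```python
from collections import ChainMap

# Innermost scope first: a binding nearer the invocation shadows
# ("overrides") one further out, like overriding a default argument.
library_defaults = {"color": "black"}
call_site = {"color": "red"}

scope = ChainMap(call_site, library_defaults)
assert scope["color"] == "red"      # the nearest binding wins

plain = ChainMap({}, library_defaults)
assert plain["color"] == "black"    # no override: fall back to the default
```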
In conclusion: I think I've proposed a simpler, more general, and more intuitive approach to implicits than all the other manifestations I've seen.
nick. i regret to inform you that this is exactly how rebol and infra work
If by exactly, you mean dynamic scoping, we discussed that higher up in the thread.
Though I do remember Rebol's approach to scoping is unique, so now I'd better revisit it, you're right.
nope! dynamic scoping looks up symbols based on where they’re called from, right? infra and rebol let you do whatever you want with your substitution semantics. could be “look up lexically” or “look up the call stack” or “evaluate to a UUID and consult the editor for what the last-clicked on object was”. so i guess that the part that’s exact is the “absence of a parameter list” part, and the part where you’re performing arbitrary substitutions based on a set of rules from user input. they also work recursively, too
for the record: i think it’s an extremely good idea. oh and the relation between uuids and functions in the same package sounds like the path infra would take—your modules have a structure, and you’re reflecting that structure via encoding it in otherwise arbitrary data. idk, smells like it’s almost a binary format to me 😉
Yes I believe under dynamic scoping you resolve all identifiers when a function is invoked, but I'm aiming for something in-between (which I'm not sure I've clearly described yet).
My proposal has nothing to do with a binary format: users of the language won't give a damn about how a program or process is stored in RAM or disk, and the language won't expose it 🙂 (Unless they're profiling performance.)
A program definitely won't be serialized as ASCII, but that's an implementation detail.
Ok, conclusion on Rebol: it's not the same as my plan. In particular, the following code prints "2" in Rebol (and in languages with dynamic scoping), but I mentioned earlier that I want to preserve lexical/syntactic reasoning, so I'd have it print "1" instead:
x: 1
foo: [ print x ]
x: 2
do foo
foo would only look for x on the stack if it wasn't found at the definition site. So if you delete the first line, foo would print "2". This brings you closer to Scala's implicits than dynamic scoping, but it's still different, as I described a few posts up.
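A tiny Python sketch of that lookup order — definition-site bindings first, then the caller's stack (the function name `resolve` is made up):

```python
# Proposed resolution: check the definition site's lexical bindings
# first, then walk the dynamic stack of caller bindings.
def resolve(name, lexical_env, dynamic_stack):
    if name in lexical_env:
        return lexical_env[name]
    for frame in reversed(dynamic_stack):
        if name in frame:
            return frame[name]
    raise NameError(name)

# Mirrors the Rebol example: x is 1 at the definition site, 2 on the stack.
assert resolve("x", {"x": 1}, [{"x": 2}]) == 1   # prints "1"
# Delete the first line (no lexical binding) and the stack is consulted.
assert resolve("x", {}, [{"x": 2}]) == 2         # prints "2"
```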
Having the source of the variable definition change based on the context of the function definition makes me nervous. It's the same feeling as accidentally declaring a new variable when trying to assign to an existing one (almost a dual problem). That's the kind of thing for which I prefer an error message (at whichever phase) rather than unexpected runtime behavior. For a text language, I would probably nope out on seeing that in the docs. I guess it's less of a problem if all your variables are secretly UUIDs, though. For text, I would be happy with a declaration at the function level that a particular var always comes from dynamic scope.
Yes, I agree it would be a mess for a textual language to try this. But in a structure-based language, I think it will work quite nicely, because you're never going to accidentally shadow a variable through naming coincidences. For every variable assignment you write, you're going to know what that variable means, and the places in your program that are able to "receive" it.
(Side question: is there an existing name for "non-text-string-based" languages? I'm using words like "structure" willy nilly.)
Question about how compilers work
Is it possible to generate example source code from an AST node and its corresponding lexer rules?
Eg. babel for js parses the modern js into ast, modifies the ast, and then writes back to compatible js source code.
oh, yes. that makes sense. can you think of a project whose source code is easier to study?
You can quite easily write your own babel plugins, that could eg. replace strings and numbers with random values.
Would be neat to look through multiple asts containing eg. the same function, and extrapolating how you may use it from those examples, ie. similar to how some NLP algos relate words with nearby words, giving it the ability to generate sentences that sound sound, but with the added benefit of the ast; that the code will parse correctly.
yes, that would be nicer since, in that case, the code is actually more meaningful
Was recently thinking about something similar: how you could autogen UI from data, given meta info of eg. how often + in what context a user would like to access it, and partly what components go together with each other and what kind of data.
the use case I have in mind is a quick reference for a language. this would serve as an easy reminder for the syntax of a specific AST node
Extremely useful to not have to visit eg http://fuckingblocksyntax.com every other minute XD Usually much easier to "get it" by looking at a bunch of closely similar examples, than reading long syntax definitions; at least initially to get going.
btw, your idea about autogening UI also crossed my mind. to generate a UI that would make you enter a valid AST node
https://svelte.dev/ describes itself as "a compile step that happens when you build your app" ... and is all about generating UI code; so perhaps that's somewhat relevant to your generating a UI ideas?
It is :) Looking to go one step deeper with a custom realtime offline first decentralized graph database with binary data streaming using wasm with rust, builtin, created from a structural declarative language, supporting new UI paradigm(s) (infinite non-euclidean zoom + scroll canvas/space with fluid structure, vs the status quo of plain documents) (see webgpu), while being fully editable and having powerful action history for free, etc etc
I don’t know enough about svelte to tell
I realize that what I’m thinking about is basically a modal editor. I’ll have to think if what I have in mind is actually practical
IDK if you are looking for the more "formal" side of this but,
I'm reminded of these projects that I had at the bottom of my bookmarks:
https://baturin.org/tools/bnfgen/
https://lcamtuf.coredump.cx/afl/
This kind of technique is used in fuzz testing quite a bit.
If you think about formal FSM/Automata theory, the normal way we use a FSM is to feed it a string and see if it gets to the end state (that's what a parser does).
You can also run a FSM in reverse to generate random strings from the grammar. The algorithm is usually something like: Start at the starting state, pick a random node, spit out the string that would match that node, pick the next random node, continue until you get to the finish state.
Typically you have a way to backtrack out if you end up in an invalid node. Invalid nodes represent parse errors, so you would fail with an error if you were just reading some user input, but since you are generating your own string, the program can rewind out of the bad state.
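Here's what that generate-by-running-the-grammar idea can look like in Python, with a toy expression grammar (the grammar and the depth cutoff are invented for illustration; real fuzzers like the ones linked above are far more sophisticated):

```python
import random

# Toy grammar: each nonterminal maps to a list of productions.
grammar = {
    "expr": [["term", "+", "expr"], ["term"]],
    "term": [["(", "expr", ")"], ["NUM"]],
}

def generate(symbol, depth=0):
    if symbol not in grammar:
        # Terminal: emit it (NUM stands in for any number literal).
        return "1" if symbol == "NUM" else symbol
    # Past a depth limit, force the last (non-recursive) production so
    # generation always terminates -- a crude way to avoid backtracking.
    options = grammar[symbol] if depth < 4 else [grammar[symbol][-1]]
    production = random.choice(options)
    return " ".join(generate(s, depth + 1) for s in production)

print(generate("expr"))   # e.g. "1 + ( 1 )"
```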
thanks, Ray. I’ll look into them. I’ve heard of Automata Theory, but I don’t know more about it
does anyone know of a knowledge management tool (Tiddlywiki, Roam) that allows you make linkable identifiers in code blocks?
for example, if I have this code
data Path a = Path Bool [PathSegment a] a
I want to make PathSegment linkable and have its definition in another tiddler/block/entry, etc.
We're experimenting with linking to/from code segments in NextJournal, but it'll be months before we have anything polished enough to show off.
The first place I heard of such a feature was Charles Simonyi's "Intentional Domain Workbench". Here's a video about that, and he talks about identity apart from naming at about 45min in:
I think the easiest option for me right now will be to implement a tiddlywiki plugin
I just remembered that I made this thing long ago, and it seems to have links in code fragments: http://akkartik.name/countPaths.html
didn’t github add this feature for some languages using some incremental parser they open sourced? can’t remember off the top of my head
the general idea is that I’d like a knowledge management system that would allow me to link to things from code
most likely I need to write a plugin that looks for link markers in a code block as a pre-processing step
Has anyone stumbled upon interesting research around visual statecharts and state machines? Trying to come up with a really intuitive visual that captures the total power of something like x-state (fancy stuff like parallel state machines, recorded history on state machines...).
I'm not aware of any but I'd be willing to help drum up some research. As a nerdy poet without the imagination required to be a mathematician (look up David Hilbert's quote), I feel helping derive visuals of complex subjects may be the only contribution I can bring to the science that I love. Creating something like this could be the single most important contribution in the formation of a benevolent general artificial intelligence.
If you're free to chat a bit, I'm happy to jump on a call and show you what I'm stuck on and why
I’ve done a lot of practical work with statecharts.
I lean towards cleaving paradigms apart - e.g. statecharts = (1) concurrency and (2) hierarchical state machines.
I compile diagrams to code and am working on a description of the techniques.
Interesting, related, stuff: FBP (Flow-Based Programming), Drakon, PEG parsing, Full Metal Jacket.
My reading of Harel’s StateCharts paper is online.
(For lots of examples of visual programming, including some tools specifically focussed on state space visualization (including the aforementioned Drakon and Full Metal Jacket), see the https://github.com/ivanreese/visual-programming-codex/blob/master/implementations.md page in my https://github.com/ivanreese/visual-programming-codex/)
I'm gonna shoot a video of my problem and send it over in a bit to give you fine folks a sense of the magnitude of the challenge I'm dealing with
I almost asked this same question! I ran into the concept of "behavior trees" while looking for stuff about HFSMs, and thought they looked interesting (BTs start on slide 12): https://web.stanford.edu/class/cs123/lectures/CS123_lec08_HFSM_BT.pdf They apparently originated in video game AI, but in the context of these lecture notes are meant for robot AI. They struck me as an intuitive and powerful state machine formalism, almost a general-purpose code formalism.
Anyway, I would love to see field accounts of these ideas, especially if put in the hands of ostensible non-programmers.
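For anyone who hasn't met behavior trees, their core is small enough to sketch in a few lines of Python (the robot tasks here are made up):

```python
# Behavior trees compose tasks with two basic node types:
# - sequence: run children in order, fail on the first failure
# - selector: try children in order, succeed on the first success
def sequence(*children):
    return lambda ctx: all(child(ctx) for child in children)

def selector(*children):
    return lambda ctx: any(child(ctx) for child in children)

# Hypothetical leaf tasks for a toy robot.
def battery_ok(ctx):
    return ctx["battery"] > 20

def do_work(ctx):
    ctx["worked"] = True
    return True

def recharge(ctx):
    ctx["battery"] = 100
    return True

# "Work if you can, otherwise go recharge."
tree = selector(sequence(battery_ok, do_work), recharge)

ctx = {"battery": 10}
tree(ctx)
assert ctx["battery"] == 100   # low battery: the recharge branch ran
```

The short-circuiting of `all`/`any` gives the usual "stop at first failure/success" semantics for free.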
Why don't we have a group brainstorm session if we still need to explore it?
So we're using state diagrams in a peculiar way which is likely different. We kinda took the concept and added a ton of bells and whistles to it and made the state diagram the backbone of defining the behaviours of your backend. The state diagram is hyper-aware of the rest of our abstraction (our pre-built auth system, our permissioning system) and that's a big part of how we decrease the surface area of code.
I don't usually do this but I figured shooting a real raw video might work. I tried to keep it short (8 min) and I go through my entire problem. Turns out, I actually had three of em. Apologies if this thread is now in the wrong place. Sounds like thinking together is where it now belongs.
Have you read Harel’s original paper? It has visuals for both problem 2 and 3 in your video.
This might sound glib, but I'm 100% serious. The best solution I've seen to problem #1 (how do you get people past the initial unfamiliarity / reluctance?) is to make your GUI look enviably cool. Make the GUI just explode charm, style, je ne sais quoi, mystery. Make it draw people in. Make it look like nothing they've ever seen before. Make it elicit and reward curiosity. Don't allow people to grossly assume that they know what this GUI is, what it does, and that it's not worth the 20 minutes of attention to learn how it really works.
This is a very hard problem, but it's not an NP-hard problem. It's do-able, and people do it all the time.
(I answered a slight variation of your question, because the other half of the question — how do you make the GUI self-revealing to new users — is something it sounds like you already have a good idea of how to do. Click targets, tooltips, progressive disclosure... typical UI design jazz.)
^ great feedback. I should have added one crazy caveat: we have a "no progressive disclosures" rule right now which makes every design challenge infinitely harder. The reason for this is we think there's going to be some weaning required to get devs acclimated to even the slightest GUI, mostly because of legacy rhetoric. So the idea behind our UI is that it has to be entirely keyboard navigable. So we figured that out on storage with great difficulty. On the behaviour page (the statechart) obviously it's impossible. But we're still holding out in hopes that we'll come up with a way to structure this diagram to make it more auto-layout and then, hopefully, intuitively keyboard navigable.
There aren't that many great ways to do a progressive disclosure and still keep the hand away from the mouse.
^ I know this goal might sound insane, but I'd rather exhaust all other options and only then introduce progressive disclosures just to keep the principle alive
maybe the keyboard-navigable state diagram is the enviably cool UI you're talking about
You might consider more of a railroad-diagram look (or at least organizing principle). Maybe oriented vertically rather than horizontally like the common examples (it will be more compact and look more like code that way for skeptical devs). I agree that auto-layout is really important for something like this, as I never ever want to fiddle with box positions.
I seriously doubt that xstate is the "state of the art" of statecharts. I bet you'll probably find more inspiration looking at Yakindu that is a really polished (alas, non-free) implementation of statecharts (including https://www.itemis.com/en/yakindu/state-machine/documentation/user-guide/quick_ref_orthogonality#quick_ref_orthogonality).
When it comes to GUI, I like the Enso approach (https://github.com/enso-org/enso/tree/main/docs/syntax): hybrid/bidirectional visual and textual syntax. There are lots of projects like http://smc.sourceforge.net/ that define a special DSL. I imagine I'd spend most of my time on the textual mode if I had that, and then use the visual one to browse. A good textual DSL with a cool browsable visual representation (perhaps even retaining just some interactive editable elements) seems like an easier sell than a "1000-clicks-no-code-ui", but that's just me 🙂 .
Good luck with your project! Looks pretty cool.
Ah, forgot to mention http://mbeddr.com/. It is a language with a projectional editor implemented in MPS; there's a demo at https://vimeo.com/78412221. So it's an example of a mostly textual UI with some cool visual elements on top.
Hard agree with Ivan Reese on the value of aesthetics here. See: Don Norman's book on Emotional Design for a longer treatment, with a brief preview of the material here:
https://www.interaction-design.org/literature/topics/emotional-design
(I failed to add a link to my reading of the StateCharts paper: https://guitarvydas.github.io/2020/12/09/StateCharts.html) (HTH).
That link at the bottom is gold. Very hard to read Harel's original paper given how the schematics are laid out. Thanks for sharing
@Mo the loom video 404’s now, but i’m extremely interested in this, especially the keyboard-navigability part. one of the things i harp on a lot is that you can in fact design a set of ui primitives that match the primitives of the algebra for the data you’re working with (vim pretty much does this for text, although the actual commands tend to be obscure and not right for editing most code)
I'm playing around with the interface for an NLP recognition framework at Storyscript (in Rust), and was wondering if anyone has seen this kind of design executed in any other contexts
This is essentially parsing where a given string can have multiple parses, yes?
Will Crichton, maybe. I've experienced mostly models that all operate on text to tags (spans) or in other places, tokens to tags.
I'm trying to understand if the process of incrementally building up layered spans with attributes has a name. I think this is closer to something like a parser combinator that can produce multiple results and fold over itself
So, yes. I suppose it's parsing with multiple parsed outcomes. But, the interface to building this parser for extensibility is what interests me.
This sounds quite similar to the multiple (logical) layers of annotations provided by modern NLP frameworks (e.g. https://stanfordnlp.github.io/CoreNLP/, https://spacy.io/). The natural language "parsing" there is mostly probabilistic so you can access multiple interpretations of a token span if you want to. Notably, these interpretations don't really cascade / explode up the logical layers, but it may be something to look at
I've built some visualizations of exactly this at my last startup. Also, a left-to-right, boxes-and-arrows multiple path depiction of the semantic understanding of the parsed text in question to help domain experts add assertions to the system to improve semantics.
Now that sounds interesting and closely related to what I'm working on right now. Can you share any more details? Would love to see that (and know how it works, heh).
the format im familiar with for this kind of work is called stand-off annotation. the codex editor project is based on it
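For anyone unfamiliar, stand-off annotation keeps the text immutable and layers (start, end, label) spans over it; here's a toy Python illustration (the labels and layer names are invented):

```python
# The text is never modified; every layer refers to it by offsets.
text = "clean the sump filler"

layers = {
    # Token layer: character spans.
    "tokens": [(0, 5), (6, 9), (10, 14), (15, 21)],
    # A higher layer labels spans (hypothetical tag names).
    "tags": [(0, 5, "ACTION"), (10, 21, "PART")],
    # Competing interpretations can coexist as alternative span sets.
    "parses": [
        [(0, 21, "COMMAND")],
        [(0, 5, "VERB"), (6, 21, "NOUN_PHRASE")],
    ],
}

# Spans are resolved against the text on demand.
assert text[10:14] == "sump"
assert [text[a:b] for a, b in layers["tokens"]] == ["clean", "the", "sump", "filler"]
```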
@Florian Cäsar This is a screenshot of the NLP development interface I built for our automotive domain experts. They would put some text from an automotive manual into the tool and look at how the text was understood by the system. There's some internal jargon here: "SIT" is a category for "service information" that most often captures the verb (in this case "clean"), while the "SSC" category captures nouns. The tabular section shows the initial matches in our taxonomy for various terms. The green oval ("SumpFiller") in the graph on the bottom is the part identification, while the rest of the graph shows how the terms were disambiguated using our knowledge graph.
Woah, that's awesome, thanks Jack Rusher. I have so many questions.. there is very little information I could find on how systems like this work in the wild. What was this built in & on? How did you structure the knowledge graph? How did you link against it? .. etc.
But of course, you may not be able or willing to share those details, so no worries if that's the case 🙂