Artificial Intelligence is a generic umbrella term for a wide variety of techniques. The current AI renaissance built on Machine Learning is all about advanced statistical techniques, and they're getting impressive, never-before-seen results in all kinds of artistic media and in tasks requiring observation and appropriate reactions. Yet it often feels like these techniques don't really understand the problem they're solving; they merely imitate what they were trained on.
Classic AI, the kind based on logical inference, is strong at understanding a situation and giving precise answers. Yet it lacks intuition, often resorts to brute force, and hasn't been shown to generate anything resembling creativity (or at least not on the level of Deep Learning). I have often wondered if there is a way to combine the strengths of both, but I know of no research that has attempted it.
Do you know of any techniques that combine ML with deductive reasoning, using the first to "learn" about a problem domain and the second to "clean up" inconsistencies and errors in the solutions the first produced "by gut feel"?
Some terms I have seen for that are "neuro-symbolic integration" and "hybrid AI", e.g. https://arxiv.org/abs/2012.05876
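To make that "gut feel, then clean up" division of labor concrete, here's a minimal Python sketch of the pattern. Everything in it (the labels, the scores, the rules) is invented for illustration; real neuro-symbolic systems are far more involved.

```python
# Toy sketch of the "learn, then clean up" loop: a statistical model
# proposes scored guesses, and a symbolic layer vetoes the ones that
# contradict known facts. All rules and scores here are made up.

# Stand-in for a trained statistical model: returns a confidence
# score per candidate label, the way a classifier's softmax would.
def statistical_guess(features):
    return {"cat": 0.6, "dog": 0.3, "fish": 0.1}

# Symbolic knowledge as hard constraints:
# (condition on the features, labels ruled out when it holds).
RULES = [
    (lambda f: f.get("lives_in_water"), {"cat", "dog"}),
    (lambda f: f.get("has_fur"), {"fish"}),
]

def repair(features, scores):
    """Drop any "gut feel" answer the rules contradict,
    then return the best surviving label."""
    ruled_out = set()
    for condition, excluded in RULES:
        if condition(features):
            ruled_out |= excluded
    consistent = {label: s for label, s in scores.items()
                  if label not in ruled_out}
    if not consistent:
        return None  # model and rules disagree entirely
    return max(consistent, key=consistent.get)

features = {"lives_in_water": True, "has_fur": False}
print(repair(features, statistical_guess(features)))
# The model's top guess is "cat", but the rules veto it -> "fish"
```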
RelationalAI seems to be exploring this intersection. Perhaps @Molham Aref has something to say.
Indeed Nick Smith, the work we are doing at RelationalAI is at the intersection of statistical and logical/relational modeling. There are so many ways to combine these approaches. Check out this overview talk by Henry Kautz at AAAI 2020, which influenced a talk by Alex Gray at IBM's Advances in Neuro-Symbolic AI Seminar Series. I hope that helps.
If you're interested in modern AI research (and not only in merging ML and logic) you should look at "Artificial General Intelligence" -- this is the contemporary term for "actual AI". They have an annual conference, journals, and many other things, as well as several research avenues and approaches, some of which (presumably) subsume both ML and logical reasoning.
If you're interested in understanding the limitations of mainstream AI approaches you might like these books: The Myth of Artificial Intelligence by Larson and Rebooting AI by Marcus and Davis. And this recent article: https://spectrum.ieee.org/deep-learning-computational-cost
It sounds like you're talking about logical abduction (as opposed to deduction/induction), which Larson discusses: forming hypotheses based on experience that may or may not be true, but are better than random guesses and cheaper to compute. Inference to the best explanation.
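A tiny sketch of what that looks like mechanically (the rules and priors below are invented numbers, not anything from Larson): deduction runs cause -> effect, while abduction runs the rules backwards, from an observed effect to a ranked list of plausible causes.

```python
# Minimal abduction sketch: given an observation, hypothesize the
# most plausible causes among those that could explain it. Unlike
# deduction, the answer isn't guaranteed -- it's a best guess that
# beats random choice. Rules and priors are invented.

# Rules of the form (cause, effect): "cause can explain effect".
RULES = [
    ("rain", "wet_grass"),
    ("sprinkler", "wet_grass"),
    ("dew", "wet_grass"),
]

# Prior plausibility of each cause, as if learned from experience.
PRIORS = {"rain": 0.5, "sprinkler": 0.3, "dew": 0.2}

def abduce(observation):
    """Return candidate explanations for the observation,
    ranked most-plausible first."""
    candidates = [c for c, e in RULES if e == observation]
    return sorted(candidates, key=lambda c: PRIORS.get(c, 0.0),
                  reverse=True)

print(abduce("wet_grass"))  # ['rain', 'sprinkler', 'dew']
```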
It does seem that these statistical techniques somehow end up modeling what people would call gut reactions: the first thing you think of when you haven't yet thought things through.
@William Taysom I'm pretty sure that's exactly because both "gut feeling" and ML are based on pattern recognition and matching. Patterns are the first thing the brain latches onto. Only when that fails do we reach for models.
Larson claims the limitation of AI is that it can only recognize patterns it has seen. Take self-driving cars, for example. If you're driving behind a pickup truck on the highway and a birthday balloon falls off the back of it, you probably realize it is safe to drive over it. You've never seen this happen before, but you know that balloons are filled with air, and its movement suggests as much. What about an AI? If it hasn't been trained on this scenario, perhaps it slams on the brakes and causes a collision.
I can tell you that Teslas freak out over a lot less than a balloon. It's a lot less an "autopilot" than a mixed-initiative, adjustable-autonomy system. I haven't used those words in a long time. https://www.semanticscholar.org/paper/Dimensions-of-Adjustable-Autonomy-and-Interaction-Bradshaw-Feltovich/e4bfe3b40d36f0b8c79c4c98319d4bf51569d080