Why We Need to Reframe Our Thinking on the AI Solution // With Sevak Avakians

Written by Pete Sena • 3 min read

Sevak Avakians: So part of the thinking is that there’s no way for us, as mortal humans, to build intelligence into a machine directly. We’re not all geniuses (I certainly am not). Now, what IBM has done with Watson is actually incredible, and it was the perfect technology for the period in which they developed it, because there was nothing else really out there. They moved that ball forward, right? But our view is that it doesn’t matter how large your team is. You could have a Manhattan Project-like environment with the most brilliant people around, the Edisons of our time, and there’s still no way you can hand-build a machine that’s going to be as smart as a person. There’s too much complexity. Even the project management of it alone would be overwhelming. Developers come and go, scientists come and go, and they take their knowledge with them, leaving holes, and something like that needs to last many years. People are going to retire and that knowledge is gone. How are you going to manage all that?

Pete Sena: You can’t transfer consciousness.

Sevak Avakians: Correct (well, hopefully sometime in the future). But the idea is that rather than trying to create that intelligence directly, you set up an environment where components connect together in some fundamental ways that allow the intelligence to emerge. What we’ve done is figure out that there’s a raw cognitive process that works regardless of the brain. Whether it’s a rat’s brain or a worm’s brain or a human’s brain, there’s this fundamental cognitive function that works for everyone. So we built that cognitive function into what we call a “cognitive processor,” and then we allow it to connect with other cognitive processors and with things we call “manipulatives,” which are atomic operators. These build up topologies that look like the neural net topologies you’re used to seeing, you know, like a convolutional neural net, but they’re fundamentally different. The topologies have to look a lot like that because intelligence is hierarchical. You need to connect these different sensors and different inputs into a system so that it can start understanding its environment first, learning from it, and...

Pete Sena: The ability to understand context, essentially.

Sevak Avakians: Exactly. Without context, there’s no way that the intelligence...

Pete Sena: It’s like lottery data, right?

Sevak Avakians: Yeah, that won’t work, right? People don’t know that yet with the tools they’ve been using. So we also had to develop another part of our platform that understands that too, one that can show them, “No, this is just random noise. You can’t use that kind of data to make predictions about the future, like lottery numbers.”
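
To make that random-noise point concrete, here is a minimal, hypothetical sketch (not the actual detector in the platform Sevak describes): it trains a naive “what usually follows what” predictor on the first half of a sequence and tests it on the rest. On lottery-style noise the accuracy stays at chance level, while on a structured signal it approaches 100%.

```python
import random
from collections import Counter, defaultdict

def next_symbol_accuracy(sequence):
    """Train a naive 'what follows what' model on the first half of a
    sequence, then measure how often it predicts the second half."""
    mid = len(sequence) // 2
    follows = defaultdict(Counter)
    for prev, nxt in zip(sequence[:mid], sequence[1:mid + 1]):
        follows[prev][nxt] += 1

    hits = total = 0
    for prev, nxt in zip(sequence[mid:-1], sequence[mid + 1:]):
        guess = follows[prev].most_common(1)
        if guess:
            hits += guess[0][0] == nxt
            total += 1
    return hits / total if total else 0.0

random.seed(0)
lottery = [random.randint(1, 10) for _ in range(5000)]  # pure noise
pattern = [i % 10 + 1 for i in range(5000)]              # structured signal

print(f"lottery-style noise: {next_symbol_accuracy(lottery):.2f}")  # ~0.10, chance
print(f"structured signal:   {next_symbol_accuracy(pattern):.2f}")  # ~1.00
```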
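And to give a rough sense of the hierarchical arrangement Sevak describes earlier, here is a small illustrative sketch, purely hypothetical and not the actual cognitive-processor or manipulative implementation: each toy processor learns which observation tends to follow which on one sensor, and a higher-level processor takes the lower-level processors’ predictions as its own input, forming a simple hierarchy.

```python
from collections import Counter, defaultdict

class CognitiveProcessor:
    """Toy sequence learner: remembers which observation tends to follow which.
    Hypothetical illustration only, not the platform's actual component."""
    def __init__(self, name):
        self.name = name
        self.transitions = defaultdict(Counter)  # last symbol -> counts of next symbol
        self.last = None

    def observe(self, symbol):
        if self.last is not None:
            self.transitions[self.last][symbol] += 1
        self.last = symbol

    def predict(self):
        if self.last is None or not self.transitions[self.last]:
            return None
        return self.transitions[self.last].most_common(1)[0][0]

# Lower-level processors each watch one sensor; a higher-level processor
# takes their predictions as its own input, forming a simple hierarchy.
temperature = CognitiveProcessor("temperature")
light = CognitiveProcessor("light")
fusion = CognitiveProcessor("fusion")

for hour in range(48):  # two simulated days of hourly readings
    temperature.observe("warm" if 8 <= hour % 24 <= 18 else "cold")
    light.observe("bright" if 7 <= hour % 24 <= 19 else "dark")
    fusion.observe((temperature.predict(), light.predict()))

print(fusion.predict())  # the hierarchy's expectation for the next combined state
```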