I think this engineer is just a big drama queen. These AI chatterboxes have been around for a long time. Under the hood it's just statistical pattern matching and natural language processing; the machine doesn't have to understand anything.

It goes back to Searle's Chinese Room argument. Say you're locked in a room and you've been given a rulebook. People slip questions under the door written in a language you don't understand, Chinese in the original thought experiment, but it could be any language. The rulebook says: if you see this symbol, write that one back; if that symbol comes up, do that instead. Eventually you get so good at responding to these symbols that a native Chinese speaker might think you understand Chinese. But you don't, and you don't need to, because you're just following the instructions you were given.

An AI might be able to fool you into thinking it knows what it's talking about, but that doesn't mean it truly understands. Of course, it may be impossible to tell the difference between a very sophisticated illusion of sentience and legit sentience.
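To make the Chinese Room concrete, here's a toy sketch in Python. It's purely illustrative (the rulebook entries are made up, and a real chatbot is statistical, not a lookup table), but it shows how plausible replies can come out of pure symbol matching:

    # Toy Chinese Room: the operator matches incoming symbols against a
    # rulebook and copies back the prescribed reply, understanding none of it.
    RULEBOOK = {
        "你好吗": "我很好，谢谢",      # "How are you?" -> "I'm fine, thanks"
        "你叫什么名字": "我叫小房间",  # "What's your name?" -> "I'm called Little Room"
    }

    def operator(message: str) -> str:
        # The operator can't read Chinese; they just compare shapes.
        for symbols, reply in RULEBOOK.items():
            if symbols in message:
                return reply
        return "请再说一遍"  # fallback rule: "Please say that again"

    print(operator("你好吗？"))  # prints 我很好，谢谢 -- zero understanding involved

Scale that rulebook up by a few billion entries, or swap it for a statistical model trained on half the internet, and you get something that can fool a lot of people. Which is exactly the point.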