As discussed in this thread, a neural network stored in a smart contract could potentially be fooled by purposefully constructed malicious data (although supplying such data may not always be feasible; for instance, if the connection between a camera and the blockchain is secure, an attacker cannot modify the image).
One question is whether malicious data could be filtered out by running inputs through an ensemble of multiple networks with different parameters, accepting a result only when the networks agree.
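As a rough illustration of the idea, here is a minimal sketch of agreement-based filtering. The "networks" are toy stand-in functions (an adversarial input crafted against one set of parameters often does not transfer perfectly to others, which is what the agreement check exploits); the names and thresholds are my own, not from any particular library.

```python
from collections import Counter

def ensemble_filter(models, x, min_agreement=1.0):
    """Run x through every model; return the label only if at least
    min_agreement fraction of the models agree, else return None
    (i.e. treat the input as potentially adversarial)."""
    predictions = [m(x) for m in models]
    label, count = Counter(predictions).most_common(1)[0]
    if count / len(models) >= min_agreement:
        return label
    return None

# Toy stand-ins for networks with slightly different parameters:
models = [
    lambda x: "cat" if x < 10 else "dog",
    lambda x: "cat" if x < 12 else "dog",
    lambda x: "cat" if x < 8 else "dog",
]

print(ensemble_filter(models, 5))   # all models agree -> "cat"
print(ensemble_filter(models, 9))   # models disagree  -> None
```

Whether this actually stops a determined attacker is an open question: an adversarial example can in principle be crafted against the whole ensemble at once, though that is generally harder than fooling a single network.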
Another interesting possibility is to generate many malicious data samples and then train a dedicated network to detect them.
I think what we do know is that a human brain cannot be easily fooled this way: one cannot create a picture that a human sees as a cat but that is "really" a dog. There seems to be some “anti-fooling” mechanism in the human brain.