My new best friend ChatGPT has so far helped me write a Blender plug-in even though I have no Python experience. I know that it works, but I can't really test it or judge whether any of this code is secure or idiomatic Python.
So therein lies the 'why' we need experienced people to operate the factory machines: it's one thing to spew out code, but you still need experience to quality-control and sanity-check it, something AI still has to work hard on.
However, here's the issue: I'm happy to release my Blender plugin without that experience. For all I know, I have to trust that this AI isn't injecting malicious code unintentionally, and that's interesting, isn't it?
There are no human errors in AI because there are no humans, but it may still be possible for a bad actor to inject nasty bits of code that might not be checked to the same degree… This code suffers from the ageing-product problem: I didn't write it, but I must trust my peers, and that's the trust that could be exploited. It's new and kind of scary.