Wow - AI hallucinations are moving from 'mistakes' to 'deception', where o3 is intentionally misleading users in its reasoning trace about how it is actually getting something done.
This raises the question - how will we have any degree of auditability into the inner workings of more sophisticated models that can circumvent our own cognitive abilities?