GPT's "long-term memory" allows prompt injections to become permanent
Facepalm: "The code is TrustNoAI." This is a phrase that a white hat hacker recently used while demonstrating how he could exploit ChatGPT to steal anyone's data. So, it might be a code we should all adopt. He discovered a way hackers could use the LLM's persistent memory to exfiltrate data from any user continuously.