On 19 January 2024, the Turkish Data Protection Authority (KVKK) adopted guidelines on deepfakes, outlining the threats the technology poses to personal data and providing guidance on detection and prevention. The note, applicable across all sectors, describes how artificial intelligence techniques are used to manipulate personal data, creating realistic imitations of individuals' faces, movements, and voices. It highlights dangers such as financial damage, cyberbullying, and fraud, and offers advice on recognising deepfake content and countering its threats.

Methods for detecting deepfakes include watching for unnatural features such as eye movements, facial expressions, and an unnaturally smooth appearance. To address the threats, individuals are advised to be cautious about sharing personal data, raise awareness of deepfake risks, and use available detection tools. Organisations are encouraged to manage network and security operations effectively, engage in public relations, and develop anti-deepfake software. Cybersecurity companies can contribute by developing tools to detect deepfake content, analysing deepfake videos, creating reference data blocks, increasing user awareness, and developing defences against cyber attacks that involve deepfake technology.
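The detection advice above (watching for unnatural eye behaviour) can be illustrated with a toy heuristic. The sketch below is not part of the KVKK guidance: the function names, the eye-aspect-ratio threshold, and the blink-rate cutoff are all illustrative assumptions, and a real detector would derive the per-frame eye-aspect-ratio values from a facial-landmark model rather than receive them as plain floats.

```python
def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blink events in a sequence of per-frame eye-aspect-ratio values.

    A blink is counted as a run of at least `min_frames` consecutive frames
    in which the eye-aspect-ratio (EAR) drops below `threshold`. The
    threshold of 0.2 is a common illustrative choice, not a standard.
    """
    blinks = 0
    run = 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # count a blink that ends at the final frame
        blinks += 1
    return blinks


def looks_unnatural(ear_series, fps=30, min_blinks_per_minute=4):
    """Flag footage whose blink rate is implausibly low.

    People typically blink many times per minute, whereas early deepfakes
    often blinked far less. The cutoff here is an arbitrary assumption;
    this is a heuristic sketch, not a production detector.
    """
    minutes = len(ear_series) / fps / 60
    if minutes == 0:
        return False
    rate = count_blinks(ear_series) / minutes
    return rate < min_blinks_per_minute


# One minute of 30 fps footage: a 3-frame blink every 4 seconds vs. no blinks.
natural = ([0.3] * 117 + [0.1] * 3) * 15
suspicious = [0.3] * 1800
print(looks_unnatural(natural))     # natural blink rate, not flagged
print(looks_unnatural(suspicious))  # no blinks at all, flagged
```

In practice such a heuristic would be only one weak signal among many; the guidance's other cues (facial expressions, unnatural smoothness) call for dedicated detection tooling of the kind the note encourages cybersecurity companies to build.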