

The deepfake threat just got a little more personal

A two-hour conversation with an AI model is enough to create a fairly accurate image of a real person's personality, according to researchers from Google and Stanford University. As part of a recent study, the researchers generated "simulation agents", essentially AI replicas, of 1,052 people based on two-hour interviews with each participant. The interviews followed an interview protocol developed by the American Voices Project, which explores a range of topics of interest to social scientists, including life stories and views on current societal issues, and were used to train a generative AI model designed to mimic human behavior.

To evaluate the accuracy of the AI replicas, each participant completed two rounds of personality tests, social surveys, and logic games. When the AI replicas completed the same tests, their results matched the answers of their human counterparts with 85% accuracy. The replicas performed particularly well at reproducing answers to personality questionnaires and determining social attitudes, but they were less accurate when it came to predicting behavior in interactive games that involved economic decisions.

The impetus for developing the simulation agents was the possibility of using them to conduct studies that would be expensive, impractical, or unethical with real human subjects, the scientists explain. For example, the AI models could help evaluate the effectiveness of public health measures or better understand reactions to product launches. Even modeling reactions to important social events would be conceivable, according to the researchers. "General-purpose simulation of human attitudes and behavior, where each simulated person can engage across a range of social, political, or informational contexts, could enable a laboratory for researchers to test a broad set of interventions and theories," the researchers write.

However, the scientists also acknowledge that the technology could be misused. For example, the simulation agents could be used to deceive other people online with deepfake attacks. Security experts already see deepfake technology advancing rapidly and believe it is only a matter of time before cybercriminals find a business model they can use against companies. Many executives have already said their companies have been targeted with deepfake scams of late, in particular scams targeting financial data. Security company Exabeam recently discussed an incident in which a deepfake was used as part of a job interview, in conjunction with the rising North Korean fake IT worker scam.

The Google and Stanford researchers propose the creation of an "agent bank" of the 1,000-plus simulation agents they have generated. The bank, hosted at Stanford University, would "provide controlled research-only API access to agent behaviors," according to the researchers. While the research does not expressly advance any capabilities for creating deepfakes, it does show what is fast becoming possible today in terms of simulating human personalities in advanced research.

First seen on csoonline.com

Jump to article: www.csoonline.com/article/3633099/the-deepfake-threat-just-got-a-little-more-personal.html
