Deepfake detection tool unveiled by Microsoft

Microsoft has developed a tool to spot deepfakes - computer-manipulated photos and videos in which one person's likeness has been used to replace that of another.

The software analyses photos and videos and gives each a confidence score indicating how likely it is that the material was artificially created.

The firm says it hopes the tech will help "combat disinformation".

One expert has said it risks becoming quickly outdated because of the pace at which deepfake tech is advancing.

To address this, Microsoft has also announced a separate system to help content producers add hidden code to their footage so any subsequent changes can be easily flagged.
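
Microsoft has not published the internals of that watermarking system, but the underlying idea is a cryptographic fingerprint: the producer computes a digest of the footage, and any later edit, however small, changes the digest. A minimal Python sketch of that principle, with placeholder bytes standing in for real footage:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest that acts as the content's fingerprint."""
    return hashlib.sha256(data).hexdigest()

original = b"...raw video bytes..."        # placeholder for real footage
published_digest = fingerprint(original)   # producer attaches this to the clip

tampered = original + b"\x00"              # any edit, however small...
assert fingerprint(tampered) != published_digest  # ...changes the digest, flagging the clip
```

In Microsoft's announcement the fingerprints travel with the content as metadata and are checked by a separate reader; the function names and workflow above are illustrative only.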

Deepfakes came to prominence in early 2018 after a developer adapted cutting-edge artificial intelligence techniques to create software that swapped one person's face for another.

The process worked by feeding a computer many still images of one person and video footage of another. Software then used this to generate a new video featuring the former's face in place of the latter's, with matching expressions, lip-sync and other movements.
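
The article does not name a specific architecture, but the early face-swap tools it describes were widely reported to use a shared encoder with one decoder per identity. A rough PyTorch sketch of that design follows, with made-up layer sizes and random tensors standing in for real aligned face crops:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: learns pose and expression common to both faces."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs one specific person's face."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training step: each person's faces are reconstructed through the
# shared encoder. Random tensors stand in for real 64x64 face crops.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) \
     + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)
loss.backward()

# The swap: encode person A's footage, decode with person B's decoder,
# yielding B's likeness with A's pose, expression and movements.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

Because both identities pass through the same encoder, it learns the shared structure of a moving face; swapping decoders at inference time is what transplants one person's likeness onto the other's movements.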

Since then, the process has been simplified - opening it up to more users - and now requires fewer photos to work.

Some apps exist that require only a single selfie to substitute the user's face for that of a film star within clips from Hollywood movies.

But there are concerns the process can also be abused to create misleading clips in which a prominent figure appears to say or do things they never did, for political or other gain.

Early this year, Facebook banned deepfakes that might mislead users into thinking a subject had said something they had not. Twitter and TikTok later followed with similar rules of their own.

Microsoft's Video Authenticator tool works by trying to detect giveaway signs, potentially invisible to the human eye, that an image has been artificially generated.
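
Microsoft has not released Video Authenticator's model, but its described output - a confidence score, reported frame by frame for video - maps onto any binary real-versus-synthetic classifier. A sketch of that scoring loop, with an untrained stand-in network and random tensors in place of face crops:

```python
import torch
import torch.nn as nn

# Stand-in classifier: Video Authenticator's actual model is unpublished,
# so any network mapping a face crop to a single logit works for the sketch.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))

def fake_probability(frame: torch.Tensor) -> float:
    """Map one 3x64x64 face crop to a 0-1 'likely synthetic' score."""
    with torch.no_grad():
        return torch.sigmoid(model(frame.unsqueeze(0))).item()

frames = [torch.rand(3, 64, 64) for _ in range(30)]  # placeholder frames
per_frame = [fake_probability(f) for f in frames]    # one score per frame
clip_confidence = sum(per_frame) / len(per_frame)    # simple clip-level aggregate (an assumption)
print(f"clip confidence: {clip_confidence:.2f}")
```

The averaging at the end is a convenience for the sketch; how the real tool combines per-frame scores, if at all, has not been disclosed.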