Microsoft on Tuesday rolled out two new technologies aimed at identifying and combating the influence of manipulated media.
One, the Microsoft Video Authenticator, can analyze photos and videos and give a percentage chance that they have been artificially changed, the company said in a blog post.
The other pairs technology that helps creators add digital hashes and certificates to content created with Microsoft tools with a reader that can check those hashes and certificates.
Together, the two pieces are meant to let users identify who created a piece of content and verify that it has not been altered.
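The hash-and-verify idea described above can be illustrated with a minimal sketch. This is not Microsoft's actual scheme (which also involves certificates identifying the creator); it simply shows how comparing a content hash against one the creator published reveals any tampering. The function names and sample data are hypothetical.

```python
import hashlib


def content_hash(data: bytes) -> str:
    """Return a hex-encoded SHA-256 digest of the content."""
    return hashlib.sha256(data).hexdigest()


def verify(data: bytes, published_hash: str) -> bool:
    """True only if the content still matches the hash its creator published."""
    return content_hash(data) == published_hash


# Hypothetical example: a creator publishes a hash alongside their video.
original = b"frame data of the original video"
published = content_hash(original)

print(verify(original, published))        # untouched content checks out: True
print(verify(original + b"x", published)) # any edit breaks the match: False
```

In a real deployment the published hash would itself be signed with the creator's certificate, so a reader could trust both the hash and the identity behind it.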
Microsoft also announced Tuesday that it is partnering with the nonprofit AI Foundation to make the new video authentication software available to news outlets and political campaigns.
The software company is also partnering with the BBC, CBC and The New York Times on a new project to test its authentication technology.
Lawmakers and experts have raised alarms about the emergence of manipulated media, especially when forged using artificial intelligence and machine learning, as a tool for spreading political misinformation.
The technology has not advanced far enough for forged videos to be indistinguishable from real ones, but even imperfect edits can be damaging.
Simpler edits, such as slowing footage down or cropping it selectively, are also being deployed by political campaigns to score points on social media.
Theoretically, wider use of digital hashes could address those fakes.