At the end of 2017, a user with the ID "deepfakes" appeared on the online forum Reddit and released a machine-learning algorithm for swapping faces in videos, kicking open the door to AI face swapping. At the time, however, the barrier to using the technology was still fairly high, requiring steps such as compiling code.
A month later, someone adapted this open-source algorithm and released "FakeApp", a simplified AI face-swapping tool that even ordinary users could operate with ease.
As face-swapping technology has improved and related applications have been open-sourced, its use has gradually shifted from entertainment toward crime, raising growing concern about AI face swapping.
The first concern is a severe challenge to the authenticity of information. After Photoshop appeared, seeing a picture was no longer believing; AI face-swapping technology now makes video just as untrustworthy. For an Internet already flooded with fake news, this will undoubtedly cause a further collapse of trust.
Second, the technology greatly increases the risk of portrait-right violations. No one wants their face to appear in videos they know nothing about. Media reports have previously described cases on adult video websites in which an actress's face was "installed" onto the body of an adult film performer, causing serious harm to the actress's reputation.
Given the ethical problems and potential threats this technology brings, advanced deepfake detection techniques are becoming critically important.
Previous work on deepfake video detection focused mainly on better detecting deepfake images or faces under strong supervision, with fully annotated data.
Now, a study jointly conducted by the Alibaba Security Turing Lab and the Institute of Computing Technology, Chinese Academy of Sciences, turns to a problem that is widespread in practice: partial-attack (partially tampered) videos, in which only some of the faces in a video have been manipulated.
Specifically, the study proposes a deepfake detection framework based on multiple instance learning (MIL), treating individual faces as instances and the input video as a bag.
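The bag/instance framing above can be sketched with a toy example. This is a minimal illustration of the standard MIL assumption, not the paper's model: a video (bag) is labeled fake if any face (instance) in it is fake, which traditional MIL implements by max-pooling per-instance scores at the output layer. The scores and function names here are assumptions for illustration only.

```python
import numpy as np

def mil_bag_score(instance_scores):
    """Traditional output-layer MIL aggregation: the bag (video)
    score is the maximum over instance (face) fake-probabilities,
    so one tampered face is enough to flag the whole video."""
    return float(np.max(instance_scores))

# Hypothetical per-face fake probabilities for one video;
# only the third face has been tampered with.
scores = np.array([0.05, 0.10, 0.92, 0.08])
video_score = mil_bag_score(scores)  # driven by the single fake face
```

Because only the maximal instance receives gradient through the max, the other instances learn nothing in this scheme, which is one way to see the vanishing-gradient issue the study addresses.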
Traditional multiple instance learning, however, suffers from vanishing gradients. The researchers therefore propose Sharp MIL (S-MIL), which moves the aggregation of instances from the output layer to the feature layer. On the one hand, this makes the aggregation more flexible; on the other, it lets the forgery-detection objective directly guide the learning of instance-level deep representations, alleviating the vanishing-gradient problem faced by traditional MIL. The study demonstrates that S-MIL mitigates this problem.
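The idea of moving aggregation to the feature layer can be sketched as follows. This is a simplified illustration, not the paper's exact formulation: instance features are pooled into a single bag feature using sharp softmax weights, so suspicious faces dominate the bag representation while every instance still receives gradient through the weights. The linear scoring head `w`, the `sharpness` parameter, and the feature values are all hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))  # subtract max for numerical stability
    return e / e.sum()

def smil_bag_score(features, w, sharpness=5.0):
    """Feature-layer aggregation in the spirit of S-MIL (sketch):
    pool instance features with sharpness-controlled softmax weights,
    then score the pooled bag feature with a shared linear head."""
    logits = features @ w                 # per-face fake logits
    weights = softmax(sharpness * logits) # sharp weights: fakes dominate
    bag_feature = weights @ features      # aggregate at the FEATURE layer
    return float(bag_feature @ w)         # score the bag representation

# Toy 2-D features for three faces; the second looks heavily tampered.
features = np.array([[0.1, 0.0], [2.0, 0.0], [0.2, 0.0]])
w = np.array([1.0, 0.0])
score = smil_bag_score(features, w)  # dominated by the suspicious face
```

Unlike output-layer max-pooling, the soft weights keep all instances in the computation graph, which is how feature-level aggregation helps gradients reach every instance representation.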
The researchers note that beyond partial face-swap detection, the results also offer useful insights for general research on video multiple instance learning and annotation, and that AI face-swapping technology and its detection both deserve continued attention.