Magic stickers make people "invisible" to AI: is an intelligent surveillance crisis coming?

Cameras that use AI to recognize faces and bodies in images and video are becoming more and more common, from supermarkets and offices to autonomous driving and smart cities; smart cameras that can quickly capture human bodies and recognize faces are becoming ubiquitous.
However, a research team has designed a special color pattern: hang this 40 cm x 40 cm "magic sticker" on your body and you can evade surveillance by an AI camera.
The team is from KU Leuven in Belgium (Katholieke Universiteit Leuven), and they published a paper titled "Fooling automated surveillance cameras: adversarial patches to attack person detection".
The paper's three authors, Simen Thys, Wiebe Van Ranst, and Toon Goedeme, demonstrated the attack against the popular open-source YOLOv2 object detector and successfully fooled it using a few techniques.
They have also released the source code referenced in the paper.
1. A magic sticker that works like an "invisibility cloak"
Let's take a look at what the research team did.


As shown in the figure, the person on the right is wearing a colorful sticker. The sticker successfully fools the AI system: even though he is facing the camera, he is not detected, unlike the person on the left (marked with a pink bounding box).
When the person on the right flips the sticker over, he is immediately detected.
And after the person on the right hands the sticker to the person on the left, the AI immediately loses the ability to detect the person on the left.
The researchers point out that the technique could be used to "maliciously bypass surveillance systems," allowing intruders to "sneak past surveillance cameras by holding a small piece of cardboard in front of their body" without being discovered.
According to foreign media reports, Van Ranst, one of the paper's authors, said it should not be too difficult to mount this kind of attack against existing video surveillance systems: "At the moment we still need to know which detector is in use. What we want to do in the future is generate a patch that works on multiple detectors at the same time. If that works, chances are the patch will also work on the detectors used in surveillance systems."
The team is now planning to apply the patch to clothing.
Echoing a plot device in William Gibson's science-fiction novel Zero History, the researchers write: "We believe that, if we combine this technique with a sophisticated clothing simulation, we can design a T-shirt print that makes a person virtually invisible to automatic surveillance cameras."
In the future, their work will focus on making the patches more robust and more transferable, since they currently do not carry over well to different detection architectures such as Faster R-CNN.
2. How is the adversarial patch made?
The core goal of the research is to build a system that can generate printable adversarial patches that "fool" person detectors.
The researchers write: "We do this by optimizing an image to minimize different probabilities related to the appearance of a person in the output of the detector. In our experiments we compared different approaches and found that minimizing the object loss created the most effective patches."
They then print out the optimized patches and test them by filming people holding them.
The researchers found that the patch works well as long as it is positioned correctly.
"From our results we can see that our system is able to significantly lower the accuracy of a person detector ... In most cases our patch is able to successfully hide the person from the detector. Where this is not the case, the patch is not aligned to the center of the person," the researchers said.
The optimizer's goal is to minimize a total loss function L made up of three terms: Lnps (the non-printability score), Ltv (the total variation of the image), and Lobj (the object score of the patch in the image).
Lnps measures how well the colors in the sticker can be reproduced by an ordinary printer;
Ltv encourages images with smooth color transitions and penalizes noisy images;
Lobj minimizes the object or class scores output by the detector.
The total loss function is a weighted sum of these three terms.
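As a rough illustration of how these terms could fit together, here is a minimal PyTorch-style sketch (not the authors' released code); the exact form of the non-printability term and the weights alpha and beta are assumptions made only for illustration.

```python
import torch

def total_variation(patch):
    # Ltv: penalize abrupt color changes between neighboring pixels.
    dx = torch.abs(patch[:, :, 1:] - patch[:, :, :-1]).sum()
    dy = torch.abs(patch[:, 1:, :] - patch[:, :-1, :]).sum()
    return dx + dy

def non_printability_score(patch, printable_colors):
    # Lnps: distance of every patch pixel to its closest printable color.
    # patch: (3, H, W) in [0, 1]; printable_colors: (K, 3).
    pixels = patch.permute(1, 2, 0).reshape(-1, 3)   # (H*W, 3)
    dists = torch.cdist(pixels, printable_colors)    # (H*W, K)
    return dists.min(dim=1).values.sum()

def total_loss(patch, obj_score, printable_colors, alpha=0.01, beta=2.5):
    # Weighted sum of the three terms; alpha and beta are illustrative only.
    l_nps = non_printability_score(patch, printable_colors)
    l_tv = total_variation(patch)
    l_obj = obj_score.max()  # Lobj: the detector's object score for the patched image
    return alpha * l_nps + beta * l_tv + l_obj
```

In an optimization loop, the patch pixels would be the trainable parameters and this loss would be minimized by gradient descent.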
The YOLOv2 detector outputs a grid of cells; each cell contains a set of anchors, and each anchor encodes a bounding-box position, an object probability, and class scores.
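To make that output structure concrete, the sketch below decodes a hypothetical YOLOv2-style output tensor; the grid size, anchor count, class count, and the assumption that "person" is class 0 (as in the standard COCO class list) are illustrative, not taken from the paper.

```python
import torch

# Illustrative dimensions: a 13x13 grid, 5 anchors, 80 COCO classes.
GRID, ANCHORS, CLASSES = 13, 5, 80

# Hypothetical raw detector output: for each anchor in each cell,
# a vector of (x, y, w, h, objectness, class scores).
raw = torch.randn(ANCHORS * (5 + CLASSES), GRID, GRID)
out = raw.view(ANCHORS, 5 + CLASSES, GRID, GRID)

boxes = out[:, 0:4, :, :]                            # bounding-box offsets
obj_score = torch.sigmoid(out[:, 4, :, :])           # object probability per anchor/cell
cls_score = torch.softmax(out[:, 5:, :, :], dim=1)   # class scores per anchor/cell
person_score = cls_score[:, 0, :, :]                 # class 0 = "person" in coco.names

# The patch optimization pushes scores like these toward zero.
print(obj_score.max().item(), person_score.max().item())
```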
To make the detector ignore people in an image, the researchers trained on the MS COCO dataset and tried three different approaches: minimizing the classification score for the "person" class (Figure 4d), minimizing the object score (Figure 4c), or a combination of the two (Figures 4b and 4a).
The first approach can cause the generated patch to be detected as some other class from the COCO dataset. The second approach avoids this problem, but the resulting sticker is less specific to any one class than the other methods.
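A minimal sketch of what these three targets might look like in code, assuming the objectness and per-class "person" scores have already been extracted as in the earlier snippet; the exact way the paper combines the two scores is an assumption here.

```python
import torch

def detection_loss(obj_score, person_score, mode="obj"):
    """Three ways to suppress 'person' detections, mirroring the approaches
    compared in the article; obj_score and person_score are both
    tensors of shape (anchors, grid, grid)."""
    if mode == "cls":                        # minimize the class score only
        return person_score.max()
    if mode == "obj":                        # minimize the object score only
        return obj_score.max()
    return (obj_score * person_score).max()  # combined: product of the two (assumed form)
```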
After experimenting with a variety of patches, the researchers found that images of random objects put through many rounds of image processing work best. They tried a range of random transformations, including rotation, random scaling up and down, added random noise, and random changes to brightness and contrast.
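The sketch below shows what such a random-transformation pipeline could look like, using torchvision's functional transforms; the ranges for rotation, scale, noise, brightness, and contrast are assumptions, not the paper's exact settings.

```python
import random

import torch
import torchvision.transforms.functional as TF

def augment_patch(patch):
    """Randomly transform the patch during optimization, in the spirit of the
    transformations listed above; all ranges here are illustrative."""
    angle = random.uniform(-20, 20)
    patch = TF.rotate(patch, angle)                         # random rotation
    scale = random.uniform(0.8, 1.2)
    new_size = [int(patch.shape[-2] * scale), int(patch.shape[-1] * scale)]
    patch = TF.resize(patch, new_size)                      # random up/down scaling
    patch = patch + 0.05 * torch.randn_like(patch)          # random noise
    patch = patch.clamp(0.0, 1.0)
    patch = TF.adjust_brightness(patch, random.uniform(0.8, 1.2))
    patch = TF.adjust_contrast(patch, random.uniform(0.8, 1.2))
    return patch.clamp(0.0, 1.0)

# Example usage: a random 3-channel 300x300 patch.
patch = torch.rand(3, 300, 300)
augmented = augment_patch(patch)
```

Applying these transformations at every optimization step makes the patch robust to how it will actually appear on camera: tilted, rescaled, and under varying lighting.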
Finally, the researchers evaluated the learned patches, together with a NOISE baseline (a patch of random noise) and a CLEAN baseline (no patch), on the Inria test set, focusing on how many alarms the patches could suppress.
The results show that the proportion of alarms still triggered with the OBJ patch is low (25.53%).
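As a rough idea of what such an "alarm rate" metric could look like in code, here is a simplified sketch; the run_detector helper and the flattened (class, confidence) detection format are hypothetical stand-ins for a full detection pipeline.

```python
def alarm_rate(detections_per_image, person_class=0, conf_threshold=0.5):
    """Fraction of test images on which the detector still raises a 'person'
    alarm; detections_per_image holds, for each image, a list of
    (class_id, confidence) pairs (a simplified stand-in for full boxes)."""
    alarms = sum(
        any(cls == person_class and conf >= conf_threshold for cls, conf in dets)
        for dets in detections_per_image
    )
    return alarms / len(detections_per_image)

# Hypothetical usage, comparing the CLEAN baseline with a learned patch:
# clean_rate   = alarm_rate(run_detector(clean_images))    # run_detector is assumed
# patched_rate = alarm_rate(run_detector(patched_images))
```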
In the example images, the first row has no patch, the second row uses a random patch, and the third row uses the optimized patch. The effect is obvious: in most cases the optimized patch successfully hides the person from AI detection.
The patch is not foolproof, however; when it fails to work, the cause is usually that it is not aligned with the center of the person.
3. Adversarial attacks are nothing new
With the rapid development of AI, security-sensitive areas such as video surveillance, autonomous driving, drones and robots, speech recognition, and text screening are facing more and more security issues.
Ever-larger deep learning networks have been found to be extremely susceptible to adversarial attacks.
For example, adding a small amount of carefully crafted perturbation to an original image can cause a classification model to misjudge it, labeling the modified image as a completely different object.
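A classic example of this kind of perturbation is the fast gradient sign method; the sketch below is a generic illustration, and the model, labels, and epsilon value are assumptions rather than any specific system mentioned in this article.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Minimal sketch of the fast gradient sign attack: a small, carefully
    chosen perturbation that can flip a classifier's prediction.
    `model` is any differentiable image classifier (an assumption)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Feeding the perturbed image back to the classifier then shows whether the prediction has changed, even though the two images look identical to a human.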
Against this background, more and more researchers have become interested in adversarial attacks in the physical world.
In November 2016, researchers from Carnegie Mellon University and the University of North Carolina showed how to use a pair of printed eyeglass frames to defeat facial recognition systems.
The researchers said: "When the image is provided to a state-of-the-art facial recognition algorithm, the glasses allow the attacker to evade being recognized or to impersonate someone else."
The reported recognition error rate while wearing the frames is as high as 100%. An attacker can exploit this vulnerability to evade the software's detection, easily impersonate someone else, and even unlock a device protected by facial recognition.
In October 2017, Jiawei Su and colleagues at Kyushu University in Japan proposed a black-box DNN attack that uses differential evolution to perturb just a few pixels (only 1, 3, or 5 of an image's 1024 pixels). Modifying those few pixels is enough to make a deep neural network misclassify the image completely.
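A rough sketch of how such a few-pixel, black-box attack could be set up with SciPy's differential evolution follows; the predict_proba function, the pixel encoding, and the optimizer settings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(predict_proba, image, true_class, n_pixels=1):
    """Black-box few-pixel attack sketch. `predict_proba(img)` is assumed to
    return class probabilities for one (H, W, 3) image with values in [0, 1];
    it stands in for the target DNN, which the attacker can only query."""
    h, w, _ = image.shape
    # Each perturbed pixel is encoded as (x, y, r, g, b).
    bounds = [(0, w - 1), (0, h - 1), (0, 1), (0, 1), (0, 1)] * n_pixels

    def apply(candidate):
        img = image.copy()
        for i in range(n_pixels):
            x, y, r, g, b = candidate[5 * i: 5 * i + 5]
            img[int(y), int(x)] = (r, g, b)
        return img

    def objective(candidate):
        # Minimize the probability the model assigns to the true class.
        return predict_proba(apply(candidate))[true_class]

    result = differential_evolution(objective, bounds, maxiter=30,
                                    popsize=20, tol=1e-5, seed=0)
    return apply(result.x)
```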
In 2018, the Berkeley Artificial Intelligence Research lab (BAIR) demonstrated a physical adversarial example against the YOLO detector, again in the form of sticker perturbations: carefully designed stickers pasted onto a "STOP" road sign.
As a result, the YOLO network failed to perceive and recognize the "STOP" sign in almost every frame. The physical adversarial examples generated for the YOLO detector were likewise able to fool a standard Faster R-CNN.
It is easy to imagine that if a self-driving car encounters a road sign with such a sticker attached and is misled into not understanding the sign, the result could be an irreparable traffic tragedy.
Conclusion: The ideal defense strategy is still being explored
Adversarial attacks have long been an interesting and very important subject in the field of machine learning.
Today, AI is gradually being deployed at scale in everyday surveillance cameras and software, appearing in scenarios such as retail, workplaces, residential communities, and transportation.
Adversarial examples can punch holes in these neural networks, for instance allowing thieves to evade the surveillance cameras in an unmanned store, or letting intruders slip into a building undetected.
For now, researchers are still far from finding an ideal defense strategy against these adversarial examples. We can look forward to breakthrough progress in this exciting field of research in the near future.
