I'm having a hard time understanding how good these images are. How is WiFi data able to determine color?
Here is another that generates images through walls: https://arxiv.org/html/2401.17417v2
I assume the colors are just hallucinations from their neural nets, with training data sets that are very similar to the validation sets.
This could really hurt their claims. If it's so overfit that it guesses color this well, then who's to say that random CSI packets wouldn't still produce a decent image?
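That would be easy enough to probe. Here's a minimal sketch of the sanity check I mean, assuming a hypothetical trained CSI-to-image model with a simple predict(csi_window) -> image interface (the interface and names are mine, not from the paper): feed noise shaped like real CSI and see whether the network still draws a confident-looking scene.

```python
import numpy as np

def reconstruction_energy(model, csi_window):
    """Run one CSI window through the model and return image variance
    as a crude proxy for 'did it draw a confident scene'."""
    img = model.predict(csi_window)  # hypothetical interface
    return float(np.var(img))

def sanity_check(model, real_csi_windows, rng=np.random.default_rng(0)):
    """Compare outputs on real CSI vs. random noise of the same shape."""
    real_scores = [reconstruction_energy(model, w) for w in real_csi_windows]
    fake_scores = [
        reconstruction_energy(model, rng.standard_normal(w.shape))
        for w in real_csi_windows
    ]
    print("real CSI  mean output variance:", np.mean(real_scores))
    print("noise CSI mean output variance:", np.mean(fake_scores))
    # If these are close, the colorful output says little about the input.
```

If noise produces scenes as vivid as real CSI does, the colors are memorized, not measured.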
What I got from it is that it's not guessing the color. It recreates what it saw in a colorful training sample when it gets a similar input - a guy walking around the room, creating interference. It could just as well be a moving barrel of similar proportions and surface characteristics. Changing the room geometry, or adding a second moving body at the same time, means it has to be retrained from scratch. So the picture is a total approximation, and a crude one.

But what can be done blind, without a camera during training, is to train on an empty room and tell when someone is there. What is significant, though: if you have some way to timestamp the WiFi input against a view from outside a window, you can match the two and then tell approximately where someone is without having a good look at the scene.
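For the "train on an empty room, detect presence" part, something as dumb as an amplitude baseline already gets you most of the way. A minimal sketch, assuming you can capture CSI amplitude frames as per-subcarrier arrays (e.g. from an ESP32 or Intel 5300 CSI tool; the capture itself is out of scope and the threshold below is a placeholder you'd tune yourself):

```python
import numpy as np

def fit_empty_room_baseline(empty_frames):
    """Per-subcarrier mean and std of CSI amplitude with nobody in the room."""
    frames = np.stack(empty_frames)            # shape (N, subcarriers)
    return frames.mean(axis=0), frames.std(axis=0) + 1e-6

def occupancy_score(frame, mean, std):
    """Average absolute z-score against the empty-room baseline."""
    return float(np.mean(np.abs((frame - mean) / std)))

def someone_is_there(frame, mean, std, threshold=3.0):
    # A moving body perturbs multipath, pushing many subcarriers off baseline.
    return occupancy_score(frame, mean, std) > threshold
```

No neural net needed for that part; the learned model only matters once you want a picture instead of a yes/no.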
Hmm that’s an interesting application.
Tinfoil suit, go!