Admit it, you’ve taken a selfie in the mirror. Don’t worry, we all have. Despite their popularity, however, we can all agree that mirror selfies aren’t really a good look. Aside from the poor lighting, unflattering angles, and visible toothpaste tubes, you’re also stuck with a picture of you holding your phone up like a doofus. Front-facing cameras have helped improve selfies, but your arms probably aren’t long enough to capture anything but your face. Instead of resorting to a selfie stick, Abhishek Singh has developed a neural network system for removing the smartphone from mirror selfies.

Like everyone else in today’s world, Singh has noticed how popular mirror selfies are. He also very astutely determined that it’s the smartphone in the picture that we all hate, not the mirror itself. He thought it would be fun to devise a way to erase the smartphone from photos, and he actually came up with a way to do so. He accomplished that with two different neural networks built on the Keras deep learning framework, which uses Google’s TensorFlow machine learning platform as a back end.

The first neural network detects objects in photos or videos and then draws a boundary around them. In this case, it’s detecting the smartphone. Once the boundaries are found, a mask can be created to erase that part of the frame. That, of course, leaves a hole in the photo or video, which is where the second neural network comes in. That network has been trained on roughly 18,000 images of Singh without a phone in hand. It can use those images to intelligently fill in the part of the frame that the first neural network removed. As you can see in the video, it does a pretty good job of that — though it isn’t perfect and makes the phone look like a distortion in the image.
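The two-stage pipeline described above — detect a region, mask it out, then fill the hole — can be sketched in a few lines of NumPy. This is only a toy illustration of the structure, not Singh’s actual networks: the detection step is replaced by a hand-supplied bounding box, and the neural inpainting is replaced by a naive mean-color fill.

```python
import numpy as np

def mask_region(frame, box):
    """Stage 1 stand-in: build a binary mask marking the pixels to erase.
    In the real system a neural object detector locates the phone;
    here the bounding box (y0, y1, x0, x1) is supplied directly."""
    mask = np.zeros(frame.shape[:2], dtype=bool)
    y0, y1, x0, x1 = box
    mask[y0:y1, x0:x1] = True
    return mask

def fill_region(frame, mask):
    """Stage 2 stand-in: fill the masked hole. The real system uses an
    inpainting network trained on ~18,000 phone-free images; this naive
    version just fills with the mean color of the unmasked pixels."""
    filled = frame.astype(np.float64).copy()
    filled[mask] = filled[~mask].mean(axis=0)
    return filled

# Toy 8x8 RGB frame: a bright background with a dark "phone" patch.
frame = np.full((8, 8, 3), 200.0)
frame[2:5, 2:5] = 0.0                      # pretend this is the phone
mask = mask_region(frame, (2, 5, 2, 5))    # stage 1: locate and mask it
result = fill_region(frame, mask)          # stage 2: fill the hole
```

In the real system both stages are learned models, which is why the fill looks plausible rather than flat — but the data flow between them is the same as in this sketch.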


