MIT video tool analyzes your video uploads, spots your lies

MIT CSAIL
(a) Four frames from the original video sequence. (b) The same four frames with the subject's pulse signal amplified.

Researchers at MIT have a shiny new video tool to share with you that lets you spot all sorts of minute motions your eye might otherwise have missed. The open source video amplification algorithm allows people to upload their own clips to reveal hidden details like the blood flow beneath a person's skin, or even to help tell if someone is lying.

The underlying technology in MIT’s program is called Eulerian Video Magnification (EVM), and it tracks every pixel in the frame and exaggerates any changes it notices. This allows you to see the tiniest movements in a person’s eyes, or the seemingly invisible color shifts in a person’s face that let you visualize their pulse.
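The core idea can be sketched in a few lines: treat each pixel's brightness over time as a signal, filter out everything but the frequency band you care about (a human pulse sits at roughly 0.8–3 Hz), scale that filtered component up, and add it back. The sketch below is a minimal illustration of that principle, not MIT's actual implementation; the function name, parameters, and synthetic "pixel" are all made up for demonstration.

```python
import numpy as np

def amplify_temporal(signal, alpha=50.0, lo=0.8, hi=3.0, fps=30.0):
    """Amplify subtle temporal variation in a per-pixel intensity series.

    Bandpass-filter the signal in the frequency domain (keeping only
    components between lo and hi Hz), scale the filtered band by alpha,
    and add it back to the original signal.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.fft.rfft(signal)
    mask = (freqs >= lo) & (freqs <= hi)           # keep only the pulse band
    band = np.fft.irfft(spectrum * mask, n=len(signal))
    return signal + alpha * band

# A synthetic pixel: constant brightness plus an invisible 1.2 Hz "pulse".
fps = 30.0
t = np.arange(0, 10, 1.0 / fps)
pixel = 128.0 + 0.05 * np.sin(2 * np.pi * 1.2 * t)

amplified = amplify_temporal(pixel, alpha=50.0)
print(np.ptp(pixel), np.ptp(amplified))  # the amplified swing is far larger
```

Run on every pixel of a video, this kind of filtering turns an imperceptible 0.05-unit brightness flicker into a clearly visible throb, which is how the amplified clips make a pulse jump out of a face.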

MIT originally developed the technology last year to measure the vital signs of newborn babies without making physical contact. Since then, though, the research team has partnered with Quanta Research Cambridge to turn it into a Web app so that you can upload your own MP4- or WebM-formatted videos. The team has also released the entire source code online.

The potential uses for this software range from lie detection for law enforcement to structural-integrity surveys by road crews to cracking down on cheating gamblers. Meanwhile, the researchers also told The New York Times that they are looking into releasing a version of the software that works with mobile devices, including smartphones and Google Glass.

Be sure to check out The New York Times for its interview with the scientists behind the Eulerian Video Magnification technology.

[Quanta Research Cambridge via Tested]

