While makers of mixed reality headsets move closer each year to convincingly merging the real and digital worlds as users see and hear them, input, the ability to interact within VR or AR environments, remains challenging: controllers are still necessary for most interactions. This week, Arizona State University researchers demonstrated a potential alternative called FMKit, which enables headsets to precisely track individual finger motions and recognize in-air handwriting.

ASU’s work goes beyond the hand tracking seen in Leap Motion accessories and Oculus Quest VR headsets, enabling an individual finger’s path to be recorded in 3D space and compared against four data sets of handwriting samples. Fingertip writing could be used to identify individual users, securely authenticate users by password, and create text input as an alternative to typing, speaking, or selecting words with a handheld controller.
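Matching a recorded fingertip path against stored handwriting samples is, at its core, a trajectory comparison problem. A common baseline approach for comparing motion paths of different lengths (offered here as an illustrative sketch, not necessarily the method FMKit itself uses) is dynamic time warping:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 3D trajectories.

    a: (n, 3) array of fingertip positions; b: (m, 3) array.
    Returns the minimum cumulative Euclidean cost of aligning a to b.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three possible alignments.
            cost[i, j] = d + min(cost[i - 1, j],      # skip a point in a
                                 cost[i, j - 1],      # skip a point in b
                                 cost[i - 1, j - 1])  # match both points
    return cost[n, m]
```

A small distance to a stored sample would indicate a likely match, which is the kind of comparison that user identification or signature-style authentication could build on.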

Beyond the system’s value as a way to turn air-written English or Chinese words into text — a feature the researchers are focusing on — the potential business applications are exciting. A distinctive signature could be drawn in the air to unlock a secured XR headset or an individually secured app, enabling companies to highly personalize the protection of digital content. Alternatively, companies could let teams share a common passcode system that goes beyond numbers or letters, recognizing symbols such as five-pointed stars or other distinctive markings.


FMKit currently supports two input devices: a Leap Motion controller that captures 110 scans per second, and a custom data glove with inertial measurement units that captures 50 scans per second. Python modules gather, preprocess, and visualize the scanned signals. As a user identification system, FMKit achieves over 93% accuracy with the Leap Motion controller and nearly 96% with the glove. For handwriting recognition, the Leap Motion controller fares better, but even then the system identifies words correctly only 87.4% of the time at best. That’s not enough to replace voice input for dictation, but it’s a good start for a system that requires nothing more than a finger and a head-mounted sensor.
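Because the two devices sample at different rates (110 versus 50 scans per second), raw signals typically need to be resampled to a common rate and normalized before comparison. The following is a minimal sketch of that kind of preprocessing step; the function name and details are illustrative assumptions, not FMKit’s actual API:

```python
import numpy as np

def preprocess_trajectory(timestamps, points, target_rate=50):
    """Resample a fingertip trajectory to a uniform rate and normalize it.

    timestamps: (N,) array of sample times in seconds (increasing).
    points: (N, 3) array of x/y/z fingertip positions.
    Returns an (M, 3) array centered at the origin with unit RMS radius.
    """
    t0, t1 = timestamps[0], timestamps[-1]
    n = int((t1 - t0) * target_rate) + 1
    uniform_t = np.linspace(t0, t1, n)
    # Linearly interpolate each axis onto the uniform time grid.
    resampled = np.column_stack(
        [np.interp(uniform_t, timestamps, points[:, i]) for i in range(3)]
    )
    # Center and scale so writing position and size don't affect comparison.
    centered = resampled - resampled.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    return centered / (scale if scale > 0 else 1.0)
```

After this step, a 110-scans-per-second Leap Motion recording and a 50-scans-per-second glove recording end up on the same time grid and scale, so the same recognition model can consume either.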

ASU’s Duo Lu, Linzhen Luo, Dijiang Huang, and Yezhou Yang have posted FMKit’s source code on GitHub as an open source project including the library and datasets, in hopes that other researchers will extend their work. The authors are presenting their research this week as part of the CVPR 2020 Workshop on Computer Vision for Augmented and Virtual Reality, and a sample video is available here.
