Innovations in Touchless Technology: A Conversation with Clay AIR
Chapter 1: The Evolution of Touchless Technology
Touchless technology has been around for quite some time, with roots in remote-controlled devices; one of the earliest consumer examples, a remote-controlled toy race car, was released by the Italian firm Elettronica Giocattoli in 1966. However, recent advancements in 3D environments and virtual reality—thanks to significant improvements in computing power—have opened up a plethora of new applications for this technology.
In 2016, Thomas Amilien co-founded Clay AIR with the goal of enhancing gesture recognition and hand tracking capabilities across various camera systems without compromising on performance or accuracy. Clay AIR empowers developers and hardware manufacturers to create more engaging and natural interactions with digital and virtual assets through computer vision. This allows users to interact seamlessly with holographic content, virtual objects, or smart devices, all without the need for controllers. To expand this innovative technology, Clay AIR collaborates with major OEMs, including Qualcomm, Lenovo, and Bose.
During a recent interview, Thomas shared insights into his journey and the future of touchless technology:
A series of questions led to an enlightening discussion:
- (A.S.Deller) What sparked your interest in STEM?
- (Thomas Amilien) My journey began at fifteen, when I started programming in high school. I went on to study physics and later became an orchestral conductor. My passion for music intertwined with technology and science, leading me to sound-stretching research at the University of Compiègne, where I also lectured on the science of music.
Section 1.1: The Birth of Clay AIR
How did your interest in touchless and gesture recognition develop?
Our research combined mathematics, computer vision, and music. We aimed to allow conductors to direct electronic soundtracks using gesture recognition technology. It was during this research that I met Jean-Baptiste Guignard, who would eventually co-found Clay AIR with me. Our shared enthusiasm for mathematics and music drove us to explore how people could create music with their hands, leading to our larger vision of transforming human interaction with technology through gesture recognition.
Subsection 1.1.1: A Vision Realized
Can you describe the moment you recognized Clay AIR as your long-term commitment?
Clay AIR began as a passion project that took up more and more of our free time, until it became clear that we had a genuine ambition to revolutionize how people interact with technology. We officially launched the company, dedicated ourselves fully to that vision, and expanded the team to gain traction in the industry.
Section 1.2: Looking Ahead
What are your immediate and mid-term goals at Clay AIR?
Our aim is to introduce a new method of interacting with various technologies, from personal smart assistants to remote technicians in industrial settings. In the short term, we want to build a community of users, technologists, and designers who share our vision. While we bring the technology, the next step is to collaboratively develop the interface that connects technology and human interaction.
In the past year, we've witnessed a growing demand for touchless technology as brands seek to transform traditional touch screens into hygienic, touchless alternatives. Recognizing the difference between hand tracking for extended reality and touchless interactions, we are forming partnerships with leading consumer electronics brands to design innovative solutions.
Our engineering team is continually optimizing our models for performance and accuracy. As a hardware-agnostic company, we strive to deliver ready-to-use algorithms on any device without compromising on power efficiency.
How can gesture recognition improve accessibility for disabled individuals?
Gesture recognition technology, being intuitive and natural, can make interactions more accessible. Companies like Tobii are already achieving significant results in helping those with motor disabilities interact using gaze control. Moreover, in virtual reality, hand tracking can facilitate functional rehabilitation, allowing patients to receive care from home, which can lower costs and improve comfort.
Chapter 2: Challenges in Gesture Recognition
What are some of the most complex use cases for gesture recognition?
One major hurdle is what we refer to as the 'Minority Report Myth.' Conductors train for years to hold their arms up for extended periods; most people can only sustain such gestures briefly. Gesture recognition should therefore be used judiciously, for tasks like navigating menus or collaborating on 3D objects, and user interfaces must allow for breaks or transitions back to traditional methods of interaction.
Another challenge is the absence of haptic feedback. Unlike physical interactions, touchless technology lacks the tactile sensations that signal successful engagement. Solutions such as wearable devices can help bridge this gap by providing alternative sensory feedback, whether visual or auditory.
Do you foresee integrating facial and gait recognition with gesture recognition?
By understanding hand position, we can infer broader body positioning using a class of techniques known as inverse kinematics. Currently, we can link gaze control with gesture recognition, allowing users to select menu items with a pinch gesture. Given how much ground gesture recognition already covers, we plan to incorporate object recognition and tracking into our roadmap as well.
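The pinch selection mentioned above is often implemented by thresholding the distance between two fingertip landmarks. The sketch below is a minimal, hypothetical illustration — it assumes normalized 3D landmarks following the common 21-point hand model (such as the one popularized by Google MediaPipe), and the threshold value is illustrative, not Clay AIR's actual implementation:

```python
import math

# Landmark indices in the common 21-point hand model:
# 4 = thumb tip, 8 = index fingertip (illustrative assumption).
THUMB_TIP, INDEX_TIP = 4, 8

def is_pinching(landmarks, threshold=0.05):
    """Return True when the thumb tip and index fingertip are close
    enough to count as a pinch. `landmarks` is a list of (x, y, z)
    tuples in normalized image coordinates; `threshold` is illustrative."""
    tx, ty, tz = landmarks[THUMB_TIP]
    ix, iy, iz = landmarks[INDEX_TIP]
    dist = math.sqrt((tx - ix) ** 2 + (ty - iy) ** 2 + (tz - iz) ** 2)
    return dist < threshold

# Toy frames: 21 landmarks, with only the two fingertips placed.
open_hand = [(0.0, 0.0, 0.0)] * 21
open_hand[THUMB_TIP] = (0.30, 0.50, 0.0)
open_hand[INDEX_TIP] = (0.60, 0.40, 0.0)   # fingertips far apart

pinched_hand = list(open_hand)
pinched_hand[INDEX_TIP] = (0.32, 0.51, 0.0)  # fingertips nearly touching

print(is_pinching(open_hand))     # → False
print(is_pinching(pinched_hand))  # → True
```

In practice, production systems would add per-frame smoothing and hysteresis (separate "pinch start" and "pinch release" thresholds) so the gesture does not flicker near the boundary.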
What benefits come from collaborating with various imaging devices like Leap Motion or Kinect?
Our approach differs significantly from solutions like UltraLeap and Kinect, which rely on additional sensors for precision. While these technologies excel in specific high-demand applications, our vision focuses on providing hand interaction capabilities to a broader audience without the need for extra hardware. By utilizing existing cameras, we achieve lower power consumption and promote scalability.
Are there useful datasets or open-source tools for those working on gesture recognition projects?
OpenCV and Google MediaPipe are excellent starting points. Additionally, we are excited to soon release our technology on Nreal's augmented reality glasses, opening up opportunities for developers to create applications utilizing hand tracking and gesture controls.
What societal impacts do you envision for gesture recognition technology?
Gesture recognition can significantly enhance accessibility, especially for individuals with mobility challenges. The technology's applications in pain management and rehabilitation offer substantial benefits. Furthermore, in sectors like automotive, gesture recognition can improve safety, while immersive virtual experiences can enhance training and educational methods.
As we navigate the ethical considerations surrounding artificial intelligence and computer vision, our philosophy emphasizes human-centered design. At Clay AIR, we view AI as a tool to augment human capabilities, requiring careful monitoring of its ethical implications.
Our team is dedicated to advancing machine learning and human-machine interactions. For further discussions on the future of interaction technology, feel free to connect with us via LinkedIn or our website.
Thank you for exploring this interview with Thomas Amilien, co-founder of Clay AIR.