IoT commands through gesture recognition
How can you send and execute IoT commands through AI-based real-time gesture recognition, without the need for complex programming?
Use Google's Teachable Machine platform to train a machine learning model for gesture recognition, and then use this platform to send IoT commands through Firebase or MQTT to microcontrollers or computers under a unique profile. Tutorial: https://youtu.be/qxKK3lcFE3Q
Steps:
1. Go to Teachable Machine to generate the image models, using your webcam to capture images of your hand gestures. Make sure the background is white while you show the hand gestures to the webcam.
2. Name each class of images with the IoT command you want to send, in reference_content format (reference, underscore, content) following a key/value principle: for example, 'pin5_1', 'pin5_0' or 'pin5_nothing'.
The reference, 'pin5' in this example, can be a pin or a memory address that will be accessed by the microcontroller or computer on the other side of the IoT network to read and execute the desired commands. In this example, the contents are '0', '1' and 'nothing'.
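On the receiving side, the key/value convention above can be recovered by splitting the class name at its last underscore. A minimal sketch in Python (the function names are illustrative, not part of the platform):

```python
def parse_command(class_name: str):
    """Split a class name such as 'pin5_1' into a (reference, content)
    pair, using the last underscore as the separator."""
    reference, _, content = class_name.rpartition("_")
    return reference, content

def is_action(content: str) -> bool:
    """A content of 'nothing' means no action should be taken."""
    return content != "nothing"
```

For example, `parse_command('pin5_1')` yields `('pin5', '1')`, while `parse_command('pin5_nothing')` yields `('pin5', 'nothing')`, which `is_action` flags as a no-op.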
3. Export the generated model, copy the model URL, and enter your profile below along with the URL and the class names.
4. After generating your profile website, press the 'Start' button and wait for the webcam to turn on.
5. As gestures are identified by the webcam on your profile website, you can see the result of your commands on Firebase, and you can also read the commands via MQTT by subscribing to profile/reference in the http://sanusb.org/mqtt/ interface.
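The subscriber side of step 5 can be sketched as a small dispatcher that reacts to messages arriving on a `profile/reference` topic. This is a hedged sketch, not the platform's own code: the dispatch logic is kept pure so it could be wired to any MQTT client library (for example, as the body of paho-mqtt's `on_message` callback), and the pin-writing callback is a stand-in for real GPIO access:

```python
def dispatch(topic: str, payload: str, write_pin):
    """Interpret a message published on '<profile>/<reference>'.

    topic     -- e.g. 'myprofile/pin5'; the last segment is the reference
    payload   -- '0', '1' or 'nothing' (the content part of the class name)
    write_pin -- callback taking (reference, level); stands in for GPIO access
    Returns True if a pin was driven, False if the message was ignored.
    """
    reference = topic.split("/")[-1]        # e.g. 'pin5'
    if payload in ("0", "1"):
        write_pin(reference, int(payload))  # drive the pin to the given level
        return True
    return False                            # 'nothing' or unknown payloads: no action

# Example: record actions in a list instead of touching hardware.
actions = []
dispatch("myprofile/pin5", "1", lambda ref, lvl: actions.append((ref, lvl)))
dispatch("myprofile/pin5", "nothing", lambda ref, lvl: actions.append((ref, lvl)))
print(actions)  # → [('pin5', 1)]
```

In a real deployment, the callback would set the actual pin (or write the memory address) named by the reference, and the function would be invoked from the MQTT client's message handler.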