<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Topics tagged with esp32]]></title><description><![CDATA[A list of topics that have been tagged with esp32]]></description><link>https://community.m5stack.com/tags/esp32</link><generator>RSS for Node</generator><lastBuildDate>Mon, 20 Apr 2026 22:07:49 GMT</lastBuildDate><atom:link href="https://community.m5stack.com/tags/esp32.rss" rel="self" type="application/rss+xml"/><pubDate>Invalid Date</pubDate><ttl>60</ttl><item><title><![CDATA[Подбор пароля к устройству.]]></title><description><![CDATA[<p dir="auto">Всем привет, я сделал простенький скрипт который подбирает пароль к устройству Andoird, выглядит он так -</p>
<pre><code>GUI b
DELAY 1000
STRING 1111
STRING 2222
STRING 3333
STRING 4444
STRING 5555
STRING 6666
STRING 7777
STRING 8888
STRING 9999
STRING 0000
STRING 0123
STRING 0124
STRING 0125
</code></pre>
<p dir="auto">но есть проблема, устройство блокируется. Можем ли мы решить эту проблему и как.<br />
Прошивка - bruce<br />
Устройство M5 Stick Plus2</p>
]]></description><link>https://community.m5stack.com/topic/7376/подбор-пароля-к-устройству</link><guid isPermaLink="true">https://community.m5stack.com/topic/7376/подбор-пароля-к-устройству</guid><dc:creator><![CDATA[nefor]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[Arduino Export binary names changed (m5stick_c_plus to m5stack_stickc_plus) why?]]></title><description><![CDATA[<p dir="auto">I've been using the export binary from Arduino for a couple years. These are put on a server so an OTA can occur.</p>
<p dir="auto">To my surprise, I made some changes, repeated my process - but the update never happened.</p>
<p dir="auto">That's when I found out the name of the binary file was changed from:<br />
<strong>m5stick_c_plus</strong> to   <strong>m5stack_stickc_plus</strong></p>
<p dir="auto">I haven't found any documentation on the name of the binary, but I assume it related to the board (and mine is still: <strong>M5StickCPlus</strong>).</p>
<p dir="auto">I had been using library <strong>2.0.4</strong> but then upgraded to <strong>2.0.7</strong><br />
and now I see the version changed to: <strong>0.4.3</strong><br />
And the web page says to use the M5Unified version. For me that's<br />
a bridge too far right now.</p>
<p dir="auto">Arduion is version 1.8.19</p>
<p dir="auto">Is there any documentation on the versions and how the binary names are created? I did just see this (thanks). <a href="https://docs.m5stack.com/en/platform" target="_blank" rel="noopener noreferrer nofollow ugc">https://docs.m5stack.com/en/platform</a></p>
<p dir="auto">I especially need to have a binary that is the same memory configuration as my 2.0.4 M5 and ESP32 since the OTA has to work for 100's of installed versions.</p>
<p dir="auto">thanks.</p>
]]></description><link>https://community.m5stack.com/topic/6722/arduino-export-binary-names-changed-m5stick_c_plus-to-m5stack_stickc_plus-why</link><guid isPermaLink="true">https://community.m5stack.com/topic/6722/arduino-export-binary-names-changed-m5stick_c_plus-to-m5stack_stickc_plus-why</guid><dc:creator><![CDATA[konacurrents]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[How to control a wired light switch with M5 Atom]]></title><description><![CDATA[<p dir="auto">The r-relay-unit looks promising. But how do I get power to the controller (M5 Atom, etc)?</p>
<p dir="auto">thanks,</p>
]]></description><link>https://community.m5stack.com/topic/6489/how-to-control-a-wired-light-switch-with-m5-atom</link><guid isPermaLink="true">https://community.m5stack.com/topic/6489/how-to-control-a-wired-light-switch-with-m5-atom</guid><dc:creator><![CDATA[konacurrents]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[EchoActions: Connecting thoughts and Robotic Arm M5Stack]]></title><description><![CDATA[<p dir="auto"><em>This article is edited and reprinted with authorization from the author, kimizuka.<br />
Original article link: <a href="https://blog.kimizuka.org/entry/2024/02/01/141808" target="_blank" rel="noopener noreferrer nofollow ugc">https://blog.kimizuka.org/entry/2024/02/01/141808</a></em><br />
<img src="https://community.robotshop.com/forum/uploads/default/original/3X/0/5/05df83ba6703369f30333ac96c2b484afe279f24.gif" alt="alt text" class=" img-fluid img-markdown" /></p>
<h2>Introduction</h2>
<p dir="auto">Translate AirPods through the iPhone app to Express, then through python-shell to pymycobot, and finally synchronize with myCobot to synchronize the rotation angle of AirPods with the posture of myCobot 🤖.<br />
<img src="/assets/uploads/files/1708939784052-503ad5a1-9d02-4904-abc0-e02b369867ae-image-resized.png" alt="0_1708939783546_503ad5a1-9d02-4904-abc0-e02b369867ae-image.png" class=" img-fluid img-markdown" /><br />
Although it's unclear if there's a demand, I will provide a rough outline of the source code.</p>
<h2>Project Structure</h2>
<p dir="auto">This project primarily employs technologies such as headphone-motion, web servers, the express framework of node.js, python-shell, and pymycobot. Here is a brief introduction to these technologies:<br />
<strong>1.Headphone-Motion:</strong> Headphone Motion tracks the user's head movements. While the specifics depend on the implementation and platform (such as iOS), the idea is to capture head-movement data from headphones with built-in sensors, such as AirPods. A more intuitive demonstration can be seen in the Headphone Motion Unity Plugin by GitHub user anastasiadeana. Real-time tracking of the user's head movements, including tilting and rotating, is the core technology of this project.<br />
<img src="https://community.robotshop.com/forum/uploads/default/original/3X/1/f/1f6f76ad65017b7dffd2609f2bc633ec444ef3bc.gif" alt="alt text" class=" img-fluid img-markdown" /><br />
<strong><a href="http://2.Web" target="_blank" rel="noopener noreferrer nofollow ugc">2.Web</a> Server:</strong> There are many types of servers, which provide data, services, or applications to other applications or devices. Servers perform tasks such as processing data requests, hosting websites, storing information, running business applications, and so on. In this project, the web server mainly handles receiving head movement data from the iOS application and passing these data to the script controlling the mycobot robot arm.</p>
<p dir="auto"><strong>3.Express-Node.js:</strong> Express is a fast, open, minimalistic web application framework for Node.js. It is designed for building web applications and APIs. It allows developers to set up middleware to respond to HTTP requests in a very fast and simple way, making the development of web applications quicker and easier.<br />
<a href="https://github.com/expressjs/express" target="_blank" rel="noopener noreferrer nofollow ugc">https://github.com/expressjs/express</a></p>
<p dir="auto"><strong>4.Pymycobot-Python:</strong> pymycobot is a Python library specifically designed for controlling the myCobot robot arm. This library provides a series of functions and interfaces that allow developers to communicate and control the myCobot robot arm directly through Python scripts. Using pymycobot, developers can write code to control the robot arm's movements, adjust its posture, execute preset action sequences, etc., making it widely applicable in education, research, automation, and more.</p>
<h2>iOS App</h2>
<p dir="auto">This application is based on an app I created previously, utilizing react-native-headphone-motion to access the sensors in AirPods through a React Native iOS application.</p>
<p dir="auto">It's an interesting project where, when you are detected bending your head down for a long time while wearing AirPods, a lamenting sound will remind you.<br />
<a href="https://twitter.com/i/status/1745588902981931443" target="_blank" rel="noopener noreferrer nofollow ugc">https://twitter.com/i/status/1745588902981931443</a><br />
<a href="https://blog.kimizuka.org/entry/2024/01/16/105208" target="_blank" rel="noopener noreferrer nofollow ugc">https://blog.kimizuka.org/entry/2024/01/16/105208</a></p>
<p dir="auto">One point to note is the change I made: I added a process to send POST requests to the web server in onDeviceMotionUpdates. Additionally, to avoid overburdening the server with POST requests on every update, I set it up to send a request at a minimum interval of 500ms.</p>
<p dir="auto">App.tsx</p>
<pre><code>useEffect(() =&gt; {
  const delay = 500;
  const handleDeviceMotionUpdates = onDeviceMotionUpdates((data) =&gt; {
     // If the time since the last request is less than 500ms, return
    if (Date.now() - lastUpdateTimeRef.current &lt; delay) {
      return;
    }

    // Post sensor values to the Web server
    axios.post(String(process.env.API_URL), { 
      pitch: data.attitude.pitchDeg || 0,
      roll: data.attitude.rollDeg || 0,
      yaw: data.attitude.yawDeg || 0
    }).then(() =&gt; {
      lastUpdateTimeRef.current = Date.now();
    }).catch((err) =&gt; {
      console.error(err);
      lastUpdateTimeRef.current = Date.now();
    });

    setPitch(data.attitude.pitch);
    setPitchDeg(data.attitude.pitchDeg);
    setRoll(data.attitude.roll);
    setRollDeg(data.attitude.rollDeg);
    setYaw(data.attitude.yaw);
    setYawDeg(data.attitude.yawDeg);
    setGravityX(data.gravity.x);
    setGravityY(data.gravity.y);
    setGravityZ(data.gravity.z);
    setRotationRateX(data.rotationRate.x);
    setRotationRateY(data.rotationRate.y);
    setRotationRateZ(data.rotationRate.z);
    setUserAccelerationX(data.userAcceleration.x);
    setUserAccelerationY(data.userAcceleration.y);
    setUserAccelerationZ(data.userAcceleration.z);
  });

  return () =&gt; {
    handleDeviceMotionUpdates.remove();
  };
}, []);

</code></pre>
<p dir="auto">In the POST request, I used axios, which can send asynchronous HTTP requests to REST endpoints and handle responses.<br />
Therefore, it's also necessary to add module imports.</p>
<pre><code>import axios from 'axios';
</code></pre>
<p dir="auto">Code</p>
<pre><code>import axios from 'axios'; // Added to simplify the POST request
import React, {
  useEffect,
  useRef, // Added to maintain a 500ms interval
  useState,
} from 'react';
import {Button, SafeAreaView, StyleSheet, Text} from 'react-native';
import {
  requestPermission,
  onDeviceMotionUpdates,
  startListenDeviceMotionUpdates,
  stopDeviceMotionUpdates,
} from 'react-native-headphone-motion';

const API_URL = 'http://localhost:3000'; // Fill in the URL to POST

export default function App() {
  const lastUpdateTimeRef = useRef&lt;number&gt;(0); // Added to keep track of the last update time
  const [pitch, setPitch] = useState(0);
  const [pitchDeg, setPitchDeg] = useState(0);
  const [roll, setRoll] = useState(0);
  const [rollDeg, setRollDeg] = useState(0);
  const [yaw, setYaw] = useState(0);
  const [yawDeg, setYawDeg] = useState(0);
  const [gravityX, setGravityX] = useState(0);
  const [gravityY, setGravityY] = useState(0);
  const [gravityZ, setGravityZ] = useState(0);
  const [rotationRateX, setRotationRateX] = useState(0);
  const [rotationRateY, setRotationRateY] = useState(0);
  const [rotationRateZ, setRotationRateZ] = useState(0);
  const [userAccelerationX, setUserAccelerationX] = useState(0);
  const [userAccelerationY, setUserAccelerationY] = useState(0);
  const [userAccelerationZ, setUserAccelerationZ] = useState(0);

  useEffect(() =&gt; {
    const delay = 500; // Store the update interval in a variable
    const handleDeviceMotionUpdates = onDeviceMotionUpdates(data =&gt; {
      if (Date.now() - lastUpdateTimeRef.current &lt; delay) {
        // Return if the update interval is not met
        return;
      }

      // Post sensor values to the Web server
      // Update lastUpdateTimeRef regardless of success or failure
      // For some reason, await was not used
      axios
        .post(String(API_URL), {
          pitch: data.attitude.pitchDeg || 0,
          roll: data.attitude.rollDeg || 0,
          yaw: data.attitude.yawDeg || 0,
        })
        .then(() =&gt; {
          lastUpdateTimeRef.current = Date.now();
        })
        .catch(err =&gt; {
          console.error(err);
          lastUpdateTimeRef.current = Date.now();
        });

      setPitch(data.attitude.pitch);
      setPitchDeg(data.attitude.pitchDeg);
      setRoll(data.attitude.roll);
      setRollDeg(data.attitude.rollDeg);
      setYaw(data.attitude.yaw);
      setYawDeg(data.attitude.yawDeg);
      setGravityX(data.gravity.x);
      setGravityY(data.gravity.y);
      setGravityZ(data.gravity.z);
      setRotationRateX(data.rotationRate.x);
      setRotationRateY(data.rotationRate.y);
      setRotationRateZ(data.rotationRate.z);
      setUserAccelerationX(data.userAcceleration.x);
      setUserAccelerationY(data.userAcceleration.y);
      setUserAccelerationZ(data.userAcceleration.z);
    });

    return () =&gt; {
      handleDeviceMotionUpdates.remove();
    };
  }, []);

  return (
    &lt;SafeAreaView style={styles.container}&gt;
      &lt;Button
        title={'requestPermission'}
        onPress={async () =&gt; {
          await requestPermission();
        }}
      /&gt;
      &lt;Button
        title={'startListenDeviceMotionUpdates'}
        onPress={async () =&gt; {
          await startListenDeviceMotionUpdates();
        }}
      /&gt;
      &lt;Button
        title={'stopDeviceMotionUpdates'}
        onPress={async () =&gt; {
          await stopDeviceMotionUpdates();
        }}
      /&gt;
      &lt;Text&gt;{lastUpdateTimeRef.current}&lt;/Text&gt;
      &lt;Text&gt;{`pitch: ${pitch}`}&lt;/Text&gt;
      &lt;Text&gt;{`pitchDeg: ${pitchDeg}`}&lt;/Text&gt;
      &lt;Text&gt;{`roll: ${roll}`}&lt;/Text&gt;
      &lt;Text&gt;{`rollDeg: ${rollDeg}`}&lt;/Text&gt;
      &lt;Text&gt;{`yaw: ${yaw}`}&lt;/Text&gt;
      &lt;Text&gt;{`yawDeg: ${yawDeg}`}&lt;/Text&gt;
      &lt;Text&gt;{`gravityX: ${gravityX}`}&lt;/Text&gt;
      &lt;Text&gt;{`gravityY: ${gravityY}`}&lt;/Text&gt;
      &lt;Text&gt;{`gravityZ: ${gravityZ}`}&lt;/Text&gt;
      &lt;Text&gt;{`rotationRateX: ${rotationRateX}`}&lt;/Text&gt;
      &lt;Text&gt;{`rotationRateY: ${rotationRateY}`}&lt;/Text&gt;
      &lt;Text&gt;{`rotationRateZ: ${rotationRateZ}`}&lt;/Text&gt;
      &lt;Text&gt;{`userAccelerationX: ${userAccelerationX}`}&lt;/Text&gt;
      &lt;Text&gt;{`userAccelerationY: ${userAccelerationY}`}&lt;/Text&gt;
      &lt;Text&gt;{`userAccelerationZ: ${userAccelerationZ}`}&lt;/Text&gt;
    &lt;/SafeAreaView&gt;
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    alignItems: 'center',
    justifyContent: 'center',
    backgroundColor: 'white',
  },
});

</code></pre>
<p dir="auto">Actually, it would be more convenient to specify the API_URL directly in the application, but I implemented it this way considering speed.</p>
<h2>Web Server</h2>
<p dir="auto">I set up a local server on my Mac. To operate myCobot, I went through several setup steps, mainly to adapt to Mac computers. These steps included installing drivers for the robot arm, updating the firmware of mycobot 280, and other operations, all detailed in this article.<br />
<a href="https://blog.kimizuka.org/entry/2021/08/10/131812" target="_blank" rel="noopener noreferrer nofollow ugc">https://blog.kimizuka.org/entry/2021/08/10/131812</a><br />
I think creating a web server with Python would be smoother, but based on my skill set, using Node.js is the fastest method. So, I plan to quickly set up the server using Express. Communication with myCobot is conducted through Python, so for this part, I decided to use python-shell to implement it.</p>
<pre><code>require('dotenv').config(); // Used to pass the port of myCobot from outside
const express = require('express');
const { PythonShell } = require('python-shell'); // Used for communicating with myCobot
const app = express();
const http = require('http').Server(app);

const duration = 100; // Passed to send_angles; if the app-side delay (500 ms) is too small, commands pile up and cause problems

app.use(express.json());
app.post('/', (req, res) =&gt; {
  try {
    const angles = [0, 0, 0, 0, 0, 0];

    // For myCobot's joint information, refer to page 13 of https://www.elephantrobotics.com/wp-content/uploads/2021/03/myCobot-User-Mannul-EN-V20210318.pdf
    // The array stores the 6 joints in order from the bottom up
    // Each joint has a definite range of motion; ensure not to exceed this range    
    angles[0] = Math.max(-90, Math.min(req.body.yaw || 0, 90)); // J1
    angles[3] = Math.max(-90, Math.min(req.body.pitch || 0, 90)); // J4
    angles[5] = Math.max(-175, Math.min(req.body.roll || 0, 175)); // J6

    // myCobot connected via USB receives instructions from Python
    PythonShell.runString(
      `from pymycobot.mycobot import MyCobot; MyCobot('${ process.env.MY_COBOT_PORT }').send_angles([${ angles }], ${ duration })`,
      null,
      (err) =&gt; err &amp;&amp; console.error(err)
    );
  } catch (err) {
    console.error(err);
  }
  res.sendStatus(200);
});

try {
  const angles = [0, 0, 0, 0, 0, 0];

  // Resets posture at startup
  PythonShell.runString(
    `from pymycobot.mycobot import MyCobot; MyCobot('${ process.env.MY_COBOT_PORT }').send_angles([${ angles }], ${ duration })`,
    null,
    (err) =&gt; err &amp;&amp; console.error(err)
  );
} catch(err) {
  console.error(err);
}

http.listen(3000, '0.0.0.0');
</code></pre>
<p dir="auto">Because it's necessary to execute pymycobot through PythonShell, the pymycobot directory must be placed at the same level as app.js.<br />
<a href="https://github.com/elephantrobotics/pymycobot" target="_blank" rel="noopener noreferrer nofollow ugc">https://github.com/elephantrobotics/pymycobot</a><br />
Once everything is prepared and the PC is connected to myCobot,<br />
you can start the web server and pass the pitch, roll, and yaw values received through POST requests to myCobot.<br />
Although this time the sensor values of AirPods are sent from an iPhone application via POST, the source of the POST could be from anywhere. Therefore, I think setting up such a server could be potentially useful in the future.</p>
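<p dir="auto">For a quick smoke test without the iPhone app, you can POST synthetic angles to the server yourself. A minimal sketch, assuming the server is listening on localhost:3000 as in the listing above and MY_COBOT_PORT is set in .env:</p>
<pre><code># Smoke test for the Express server above (no iPhone app required)
import requests

res = requests.post('http://localhost:3000/',
                    json={'pitch': 10, 'roll': 0, 'yaw': -20})
print(res.status_code)  # expect 200; myCobot should move to the clamped angles
</code></pre>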
<p dir="auto">source code:<br />
<a href="https://github.com/kimizuka/mycobot-express/tree/example/airpods" target="_blank" rel="noopener noreferrer nofollow ugc">https://github.com/kimizuka/mycobot-express/tree/example/airpods</a></p>
<h2>Summary</h2>
<p dir="auto">This project showcases the integration of headphone motion detection with robotic control, illustrating the potential of combining human movement data with robotics. By capturing head motion through smart headphones and translating it into commands for a robotic arm, it enhances human-robot interaction and introduces innovative applications in automation and assistive technologies. This approach opens up new avenues for intuitive control mechanisms in robotics, making technology more accessible and interactive.<br />
Finally, thanks again to Kimizuka for sharing this case with us. Feel free to explore further or share your insights with us.</p>
]]></description><link>https://community.m5stack.com/topic/6153/echoactions-connecting-thougths-and-robotic-arm-m5stack</link><guid isPermaLink="true">https://community.m5stack.com/topic/6153/echoactions-connecting-thougths-and-robotic-arm-m5stack</guid><dc:creator><![CDATA[ElephantRobotics]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[Hand Gestures as the Remote: M5stack-Basic integrated robotic arm]]></title><description><![CDATA[<h2>Introduction</h2>
<p dir="auto">I am a freelancer specializing in machine learning and robotics technology. My passion began during a course in artificial intelligence in college, which inspired me to explore new methods of human-machine interaction. In particular, for the operation of robotic arms, I have always wanted to simplify their complexity to make them more intuitive and easier to use.<br />
The inspiration for this project stems from my love for innovative technology and the pursuit of improving the ways humans interact with machines. My goal is to develop a gesture-based robotic arm control system that allows non-professionals to operate it with ease. For this purpose, I chose Google's MediaPipe library for gesture recognition and used the myCobot 320 M5 as the experimental platform.</p>
<h2>Technical Overview</h2>
<h3>Google MediaPipe</h3>
<p dir="auto">MediaPipe is an open-source cross-platform framework developed by Google, specifically designed for building various perception pipelines. This framework offers a wealth of tools and pre-built modules, enabling developers to easily build and deploy complex machine learning models and algorithms, especially in the field of image and video analysis.<br />
<img src="/assets/uploads/files/1705903584795-e88e975d-66a4-4fb1-ae66-a17af9a38989-image.png" alt="0_1705903584530_e88e975d-66a4-4fb1-ae66-a17af9a38989-image.png" class=" img-fluid img-markdown" /><br />
A notable feature of MediaPipe is its support for real-time gesture and facial recognition. It can efficiently process video streams and identify and track human gestures and facial features in real-time. This capability makes it incredibly useful in interactive applications, augmented reality (AR), virtual reality (VR), and robotics.</p>
<p dir="auto">You can try the gesture recognition online feature without needing to install anything.<br />
<a href="https://mediapipe-studio.webapps.google.com/home" target="_blank" rel="noopener noreferrer nofollow ugc">MediaPipe Studio</a><br />
Its easy-to-use API and comprehensive documentation make it easier to integrate this framework, making it very suitable for use in the fields of machine learning and computer vision.</p>
<h3>pymycobot</h3>
<p dir="auto">pymycobot is a Python API for serial communication and control of the mycobot robotic arm. This library is designed to facilitate developers in controlling the mycobot robotic arm using the Python language. It offers a series of functions and commands that allow users to control the movements and behavior of the robotic arm through programming. For example, users can use the library to get the angles of the robotic arm, send angle commands to control the movement of the arm, or get and send the coordinates of the robotic arm.<br />
The only requirement for using this library is that it be paired with the myCobot series of robotic arms, for which it is specifically adapted.</p>
<h2>Product Introduction</h2>
<h3>myCobot 320 M5stack</h3>
<p dir="auto">The myCobot 320 M5 is a six-axis collaborative robotic arm developed by Elephant Robotics for users. It has a working radius of 350mm and a maximum load capacity of 1000g. The robotic arm is suitable for an open ROS simulation development environment and includes forward and inverse kinematics algorithms. It supports multiple programming languages, including Python, C++, Arduino, C#, and JavaScript, and is compatible with Android, Windows, Mac OSX, and Linux platforms. The versatility of the myCobot 320 M5 makes it suitable for a variety of development and integration applications.<img src="/assets/uploads/files/1705903643038-fa0820e5-486e-4975-aa08-f37d18f38009-image.png" alt="0_1705903642794_fa0820e5-486e-4975-aa08-f37d18f38009-image.png" class=" img-fluid img-markdown" /></p>
<h3>2D Camera</h3>
<p dir="auto">A 2D camera that can be mounted on the end of the mycobot320, communicating via a USB data cable. It can present the view seen from the end of the robotic arm.<br />
<img src="/assets/uploads/files/1705903657861-05668d04-0b5d-4509-b48a-e9b735cb2e45-image.png" alt="0_1705903657612_05668d04-0b5d-4509-b48a-e9b735cb2e45-image.png" class=" img-fluid img-markdown" /></p>
<h2>Development Process</h2>
<h3>Project Architecture</h3>
<p dir="auto">I have divided this project primarily into three functional modules:<br />
Gesture Recognition: This module is mainly used for the recognition of gestures, capable of returning information about what the gesture is, such as a thumbs-up, etc.<br />
Robotic Arm Control: This main function is used for setting the motion control of the robotic arm, including coordinate control, angle control, and so on.<br />
Program Logic: This is used to handle the logic of the program's operation, setting confirmation times for gestures, resetting recognition times, etc. These will be detailed further in subsequent sections.<img src="/assets/uploads/files/1705903676515-f030fd21-5f03-4406-b5c2-0cc418f9517e-image.png" alt="0_1705903676298_f030fd21-5f03-4406-b5c2-0cc418f9517e-image.png" class=" img-fluid img-markdown" /></p>
<h2>Compilation Environment</h2>
<p dir="auto">Operating System: Windows 11<br />
Programming Language: Python 3.9+<br />
Libraries: opencv-python, pymycobot, mediapipe, time</p>
<h2>Gesture Recognition</h2>
<p dir="auto">To perform gesture recognition, we first need to obtain a camera image. Here, we use the OpenCV library to access the camera feed.</p>
<pre><code>
import cv2

# Open the camera stream: 0 is the default camera; external cameras are 1, 2, 3, ...
cap = cv2.VideoCapture(1)

# Continuously read frames from the camera
while cap.isOpened():
    # Grab the current frame
    ret, frame = cap.read()
    if not ret:
        break
    # Convert the BGR frame to RGB (MediaPipe expects RGB input)
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # Show the frame on screen
    cv2.imshow('gesture control', frame)
    # Press the 'q' key to exit and avoid an infinite loop
    if cv2.waitKey(1) &amp; 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
</code></pre>
<p dir="auto"><img src="/assets/uploads/files/1705908927203-f6808bb2-e2f7-4c63-9a1a-314c498aa323-image.png" alt="0_1705908925777_f6808bb2-e2f7-4c63-9a1a-314c498aa323-image.png" class=" img-fluid img-markdown" /><br />
With this, the image capture from the camera is successful. Next, we use MediaPipe for gesture recognition.</p>
<pre><code>import mediapipe as mp

# Initialize the MediaPipe Hands module
mp_hands = mp.solutions.hands
hands = mp_hands.Hands()
mp_draw = mp.solutions.drawing_utils


# Inside the capture loop: process the RGB frame and detect hands
result = hands.process(rgb_frame)

if result.multi_hand_landmarks:
    for hand_landmarks in result.multi_hand_landmarks:
        mp_draw.draw_landmarks(frame, hand_landmarks, mp_hands.HAND_CONNECTIONS)
</code></pre>
<p dir="auto"><img src="/assets/uploads/files/1705908950913-048861da-f25e-4b6b-8a24-045e6addff45-image.png" alt="0_1705908950612_048861da-f25e-4b6b-8a24-045e6addff45-image.png" class=" img-fluid img-markdown" /><br />
The output after recognizing a gesture is precise in identifying each joint on the hand and names each joint point. MediaPipe Hands provides 21 key points (landmarks) for the hand, collectively depicting the structure of the hand, including the wrist and the joints of each finger. Taking the thumb as an example, there are four joints, which from bottom to top are CMC, MCP, IP, TIP.</p>
<pre><code>CMC: Carpometacarpal joint
MCP: Metacarpophalangeal joint
IP:  Interphalangeal joint
TIP: Fingertip
</code></pre>
<p dir="auto">Having these landmarks alone is not enough; we need to set a method to recognize specific gestures. For example, if we want to recognize a thumbs-up gesture, we analyze that during a thumbs-up, the tip of the thumb is at the highest point above the entire palm. This makes it much easier. As long as we determine that the tip of the thumb is higher than the tips of all other fingers in the image, then the gesture is identified as a thumbs-up. (Other methods of analysis can also be used.)<br />
Generally, we can obtain three attributes for each joint: X, Y, Z, representing the position of that joint in the image.</p>
<pre><code># Get the thumb-tip landmark
thumb_tip = hand_landmarks.landmark[mp_hands.HandLandmark.THUMB_TIP]

# Its height in the image (y grows downward, so smaller y = higher)
thumb_tip.y

# Detect a thumbs-up: the thumb tip must be above every other fingertip
def is_thumb_up(hand_landmarks):
    thumb_tip = hand_landmarks.landmark[mp_hands.HandLandmark.THUMB_TIP]
    finger_tips = [
        mp_hands.HandLandmark.INDEX_FINGER_TIP,
        mp_hands.HandLandmark.MIDDLE_FINGER_TIP,
        mp_hands.HandLandmark.RING_FINGER_TIP,
        mp_hands.HandLandmark.PINKY_TIP,
    ]
    return all(thumb_tip.y &lt; hand_landmarks.landmark[tip].y for tip in finger_tips)
</code></pre>
<p dir="auto">If you want other gestures, you can also set a special identification method based on the characteristics of the hand shape. At this point, gesture recognition is completed.</p>
<h3>Robotic Arm Motion Control</h3>
<p dir="auto">Initially, my idea was that when the camera recognizes a gesture, it would send a control command to the robotic arm. Let's start with a simple action, setting the robotic arm to perform a nodding motion.<br />
The pymycobot library offers many functions that are very convenient for controlling the robotic arm.</p>
<pre><code>from pymycobot.mycobot import MyCobot
import time

# connect the robot arm
mc = MyCobot(port, baud)

# Control the robotic arm's movement using joint angles
mc.send_angles([angles_list], speed)

# Control the robotic arm's movement using coordinates
mc.send_coords([coords_list], speed, mode)

# Nodding action (a method of the RobotArmController class shown below)
def ThumbUpAction(self):
    self.mc.send_angles([0.96, 86.22, -98.26, 10.54, 86.92, -2.37], 60)
    time.sleep(1.5)
    for count in range(3):
        self.mc.send_angles([0.79, 2.46, -8.17, 4.3, 88.94, 0.26], 70)
        time.sleep(1)
        self.mc.send_angles([-3.6, 30.32, -45.79, -46.84, 97.38, 0.35], 70)
        time.sleep(1)
    self.mc.send_angles([0.79, 2.46, -8.17, 4.3, 88.94, 0.26], 70)
    time.sleep(1)
    self.mc.send_angles([0.96, 86.22, -98.26, 10.54, 86.92, -2.37], 60)
</code></pre>
<p dir="auto">To enhance the readability and modifiability of the overall code, it's beneficial to create a robotic arm class for easy calling and modification.</p>
<pre><code>class RobotArmController:

    def __init__(self, port):
        self.mc = MyCobot(port, 115200)
        self.init_pose = [0.96, 86.22, -98.26, 10.54, 86.92, -2.37]
        self.coords = [-40, -92.5, 392.7, -92.19, -1.91, -94.14]
        self.speed = 60
        self.mode = 0

    def ThumbUpAction(self):
        ...

    def OtherAction(self):
        ...
</code></pre>
<h2>Program Logic Processing</h2>
<p dir="auto">During debugging, some issues arose. When recognizing gestures, continuous recognition meant that if a gesture was recognized 10 times in 1 second, 10 commands would be sent to the robotic arm. This was not what I initially envisioned.<br />
Therefore, logical adjustments were needed. Here's how I addressed it:</p>
<pre><code># Require the gesture to be held for 2 seconds before acting: only when a
# thumbs-up is maintained for 2 seconds is the robotic arm command issued.

# init
# Whether a gesture is currently being detected
gesture_detected = False
# Time at which the gesture first appeared
gesture_start_time = None
# How long (in seconds) a gesture must be held before it is confirmed
gesture_confirmation_time = 2
# Whether the action for the current gesture has already fired
action_triggered = False

# When a specific gesture appears, gesture_start_time starts counting.
# If the gesture is still present after 2 seconds, it is confirmed and the
# corresponding robotic arm action is executed.

current_time = time.time()
if current_gesture:
    if not gesture_detected:
        gesture_detected = True
        gesture_start_time = current_time
    elif current_time - gesture_start_time &gt; gesture_confirmation_time and not action_triggered:
        if current_gesture == "thumb_up":
            mc.ThumbUpAction()  # mc is the RobotArmController instance
</code></pre>
<p dir="auto">However, this is still not sufficient, as the hand maintaining the gesture for over 2 seconds would continue sending commands to the robotic arm. Here, we need to set a cooldown period to allow sufficient time for the robotic arm to complete its movement.</p>
<pre><code>action_triggered = False
cooldown_start_time = None
cooldown_period = 2

# Process the gesture (inside the main loop)
current_time = time.time()
if current_gesture:
    if not gesture_detected:
        gesture_detected = True
        gesture_start_time = current_time
    elif current_time - gesture_start_time &gt; gesture_confirmation_time and not action_triggered:
        # Perform the action corresponding to the confirmed gesture
        if current_gesture == "thumb_up":
            print('good good')
            mc.ThumbUpAction()
        elif current_gesture == "palm_open":
            print('forward')
            mc.increment_x_and_send()
        # More gestures and corresponding actions can be added here
        action_triggered = True
        cooldown_start_time = current_time
else:
    gesture_detected = False
    gesture_start_time = None
    if action_triggered and current_time - cooldown_start_time &gt; cooldown_period:
        print('can continue')
        action_triggered = False
        cooldown_start_time = None
</code></pre>
<h2>Video</h2>
<p dir="auto"><a href="https://youtu.be/9vOPKO_IG9M" target="_blank" rel="noopener noreferrer nofollow ugc">https://youtu.be/9vOPKO_IG9M</a><br />
Summary<br />
This project demonstrates a method of using gesture recognition to control the myCobot 320, creating a new form of human-machine interaction. Although currently only a limited number of gestures and corresponding robotic arm movements have been implemented, it lays the groundwork for broader applications of robotic arms in the future. The innovative attempt to combine gestures with robotic arm control has not only improved my programming skills but also enhanced my problem-solving abilities, providing valuable experience for future related projects.</p>
]]></description><link>https://community.m5stack.com/topic/6004/hand-gestures-as-the-remote-m5stack-basic-integrated-robotic-arm</link><guid isPermaLink="true">https://community.m5stack.com/topic/6004/hand-gestures-as-the-remote-m5stack-basic-integrated-robotic-arm</guid><dc:creator><![CDATA[ElephantRobotics]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[AI-Enhanced Robotics, GPT-4V and myCobot in Action]]></title><description><![CDATA[<p dir="auto"><em>This article on the mylangrobot project was created by neka-nat, and the author Shirokuma has authorized its editing and reproduction. A big shout-out to neka-nat :)<br />
<a href="https://twitter.com/neka_nat" target="_blank" rel="noopener noreferrer nofollow ugc">https://twitter.com/neka_nat</a></em></p>
<p dir="auto"><img src="https://hackster.imgix.net/uploads/attachments/1671536/image_6gzeG2cNR1.png?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /></p>
<h2>Introduction</h2>
<p dir="auto">This project demonstrates a simple pick-and-place operation using a robotic arm, employing GPT-4V and myCobot. The demonstration uses a method named SoM (Object Detection Method) to generate robotic movements through natural language. In simpler terms, the machine receives natural language instructions, locates the target, and then commands the robotic arm to perform the grabbing task. The highlight of this project is the combination of GPT-4V's image processing and the SoM object detection algorithm, enabling the robotic arm to move through natural language interaction with the machine.</p>
<h2>Software</h2>
<h3>SoM</h3>
<p dir="auto">Set of Mark (SoM) is a method designed to enhance the visual understanding capabilities of large language models. After an image is processed with SoM, a series of markers are added to it. These markers can be recognized and processed by language models, aiding in more accurate identification and understanding of objects and content within the image. This enhancement allows language models to conduct more precise analysis and descriptions of elements in images, thereby improving their performance in visual tasks.<br />
<img src="https://hackster.imgix.net/uploads/attachments/1671537/_bwKXYoSNej.blob?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /></p>
<h3>GPT-4V</h3>
<p dir="auto">The commonly discussed GPT is a large predictive model with which we can have conversational interactions. The new iteration, GPT-4V, is a multimodal large language model that not only processes textual information but now also handles image information. It is capable of image understanding, image generation, and image description. This large model combines the powerful natural language processing abilities of GPT-4 with current image analysis technologies, offering more efficient and accurate integrated capabilities in vision and language.<br />
Below is <a href="https://cookbook.openai.com/examples/gpt_with_vision_for_video_understanding" target="_blank" rel="noopener noreferrer nofollow ugc">an example provided by OpenAI:</a></p>
<pre><code>User:
"These are frames from a video that I want to upload. Generate a compelling description that I can upload along with the video."
</code></pre>
<p dir="auto"><img src="https://hackster.imgix.net/uploads/attachments/1671538/image_ImW1wSoFMn.png?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /></p>
<pre><code>#GPT-4V Image description processing
GPT4:
"🐺 Survival of the Fittest: An Epic Tale in the Snow ❄️ - Witness the intense drama of nature as a pack of wolves face off against mighty bison in a harsh winter landscape. This raw footage captures the essence of the wild where every creature fights for survival. With each frame, experience the tension, the strategy, and the sheer force exerted in this life-or-death struggle. See nature's true colors in this gripping encounter on the snowy plains. 🦬"

Remember to respect wildlife and nature. This video may contain scenes that some viewers might find intense or distressing, but they depict natural animal behaviors important for ecological studies and understanding the reality of life in the wilderness.
</code></pre>
<h3>Openai-whisper</h3>
<p dir="auto"><a href="https://openai.com/research/whisper" target="_blank" rel="noopener noreferrer nofollow ugc">https://openai.com/research/whisper</a><br />
An automatic speech recognition (ASR) system developed by OpenAI. It utilizes advanced deep learning techniques to transcribe, translate, and understand human speech. Whisper is characterized by its ability to support multiple languages and its exceptional performance in recognizing voices with various accents and in noisy environments. Additionally, it can handle different audio qualities, making it suitable for various applications such as transcribing meeting records, automatically generating subtitles, and assisting in translations.<br />
<img src="https://hackster.imgix.net/uploads/attachments/1671539/image_hRNiVtcWKF.png?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /></p>
<h3>pymycobot</h3>
<p dir="auto"><a href="https://github.com/elephantrobotics/pymycobot/" target="_blank" rel="noopener noreferrer nofollow ugc">https://github.com/elephantrobotics/pymycobot/</a><br />
pymycobot is a Python library for the myCobot robot series. myCobot is a compact, multifunctional collaborative robotic arm suitable for education, research, and lightweight industrial applications. The pymycobot library provides a simple programming interface, enabling developers to control and program myCobot robots for operations such as movement, grabbing, and sensing. It supports multiple operating systems and development environments, making it easy to integrate into various projects, especially in robotics and automation. By using Python, a widely used programming language, pymycobot makes operating and experimenting with myCobot robots more accessible and flexible.</p>
<h2>Hardware</h2>
<h3>myCobot 280 M5</h3>
<p dir="auto">The myCobot 280 M5 is a desktop-level compact six-axis collaborative robot produced by Elephant Robotics. Designed for compactness, it is suitable for education, research, and light industrial applications. The myCobot 280 M5 supports various programming and control methods, compatible with different operating systems and programming languages, including:<br />
●Main and auxiliary control chips: ESP32<br />
●Supports Bluetooth (2.4G/5G) and wireless (2.4G 3D Antenna)<br />
●Multiple input and output ports<br />
●Supports free movement, joint movement, Cartesian movement, trajectory recording, and wireless control<br />
●Compatible operating systems: Windows, Linux, MAC<br />
●Supported programming languages: Python, C++, C#, JavaScript<br />
●Supported programming platforms and tools: RoboFlow, myblockly, Mind+, UiFlow, Arduino, mystudio<br />
●Supported communication protocols: Serial port control protocol, TCP/IP, MODBUS<br />
These features make the myCobot 280 M5 a versatile, user-friendly robot solution suitable for a variety of application scenarios.<br />
<img src="https://hackster.imgix.net/uploads/attachments/1671541/_jrCeeyf5SM.blob?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /></p>
<h3>myCobot Vertical Suction Pump V2.0</h3>
<p dir="auto">Operates on the principle of vacuum adhesion, providing 3.3V IO control, and can be extensively used in the development of various embedded devices.<br />
<img src="https://hackster.imgix.net/uploads/attachments/1671545/image_95m3eTMvvl.png?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /></p>
<h3>Camera</h3>
<p dir="auto">Standard USB and LEGO interfaces. The USB interface can be used with various PC devices, and the LEGO interface can be conveniently fixed. It is applicable to machine vision, image recognition, and other applications.<br />
<img src="https://hackster.imgix.net/uploads/attachments/1671547/image_Vsk67uhSCo.png?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /><br />
mylangrobot Software Analysis<br />
The specific workflow of the project described at the beginning is as follows:</p>
<ol>
<li>Audio Input: Record audio instructions first.</li>
<li>Audio Processing: Use "openai-whisper" to process the audio and convert it into text.</li>
<li>Language Model Interaction: Use the GPT-4 model to process the converted text instructions and understand the user's commands.</li>
<li>Image Processing: Use GPT-4V and the enhanced image capability of SoM to process images and find the target mentioned in the instructions.</li>
<li>Robotic Arm Control: Control the robotic arm to grab the identified target.</li>
</ol>
<p dir="auto">Audio Processing<br />
This function utilizes speech_recognition to capture audio data from the microphone, enabling the computer to recognize it.<br />
Libraries used:</p>
<pre><code>import io
import os
from enum import Enum
from typing import Protocol

import openai
import speech_recognition as sr
from pydub import AudioSegment
from pydub.playback import play
</code></pre>
<p dir="auto">Define interfaces, capture user input, and provide output to the user.</p>
<pre><code>class Interface(Protocol):
    def input(self, prefix: str = "") -&gt; str:
        return prefix + self._input_impl()

    def _input_impl(self) -&gt; str:
        ...

    def output(self, message: str) -&gt; None:
        ...
</code></pre>
<p dir="auto">Initialize the microphone for audio input and output.</p>
<pre><code>class Audio(Interface):
    def __init__(self):
        self.r = sr.Recognizer()
        self.mic = sr.Microphone()
        # OpenAI API key (used for the Whisper API)
        self.client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
</code></pre>
<p dir="auto">Convert the input audio into text format for output.</p>
<pre><code>def _input_impl(self) -&gt; str:
    print("Please tell me your command.")
    with self.mic as source:
        self.r.adjust_for_ambient_noise(source)
        audio = self.r.listen(source)

    try:
        return self.r.recognize_whisper(audio, language="japanese")

    except sr.UnknownValueError:
        print("could not understand audio")
    except sr.RequestError as e:
        print("Could not request results from the Whisper recognizer; {0}".format(e))
</code></pre>
<p dir="auto">The final return 'r' is the text format of the audio, which can be used for interaction with the GPT-4 model.</p>
<h3>Image Processing and GPT-4 Language Interaction</h3>
<p dir="auto">When transmitting text to the GPT-4 model for interaction, images are sent along, so image processing and interaction are discussed together.<br />
Libraries used for image processing:</p>
<pre><code>import cv2
import numpy as np
import supervision as sv
import torch
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

from .utils import download_sam_model_to_cache
</code></pre>
<p dir="auto">Primarily uses the SamAutomaticMaskGenerator feature to mark and draw markers on detected targets.</p>
<pre><code># Inside the detector class: convert the image to RGB format
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Run SAM, then filter the detections by area
sam_result = self.mask_generator.generate(image_rgb)
detections = sv.Detections.from_sam(sam_result=sam_result)
height, width, _ = image.shape
image_area = height * width

min_area_mask = (detections.area / image_area) &gt; self.MIN_AREA_PERCENTAGE
max_area_mask = (detections.area / image_area) &lt; self.MAX_AREA_PERCENTAGE
detections = detections[min_area_mask &amp; max_area_mask]

# Return the annotated image and the detection info
labels = [str(i) for i in range(len(detections))]
annotated_image = mask_annotator.annotate(scene=image_rgb.copy(), detections=detections)
annotated_image = label_annotator.annotate(scene=annotated_image, detections=detections, labels=labels)
return annotated_image, detections
</code></pre>
<p dir="auto"><img src="https://hackster.imgix.net/uploads/attachments/1671548/image_aiT6dtqGse.png?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /><br />
<img src="https://hackster.imgix.net/uploads/attachments/1671549/image_j8UHTvji8R.png?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /><br />
This produces the effect shown above.<br />
Note: the function below requires a GPT-4 API key.<br />
The marked image is passed to the GPT-4 model after some preparation; through GPT-4V, it returns information about the image content and the corresponding objects.</p>
<pre><code>def prepare_inputs(message: str, image: np.ndarray) -&gt; dict:
    # # Path to your image
    # image_path = "temp.jpg"
    # # Getting the base64 string
    base64_image = encode_image_from_cv2(image)

    payload = {
        "model": "gpt-4-vision-preview",
        "messages": [
            {"role": "system", "content": [metaprompt]},
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": message,
                    },
                    {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"}},
                ],
            },
        ],
        "max_tokens": 800,
    }

    return payload


def request_gpt4v(message: str, image: np.ndarray) -&gt; str:
    payload = prepare_inputs(message, image)
    response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
    res = response.json()["choices"][0]["message"]["content"]
    return res
</code></pre>
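<p dir="auto">A hypothetical usage of the two helpers above might look like this (it assumes OPENAI_API_KEY is set, that headers carries the usual Authorization header, and that capture.jpg is a frame from the arm-mounted camera):</p>
<pre><code># Hypothetical call to request_gpt4v with a camera frame
import cv2

image = cv2.imread('capture.jpg')
answer = request_gpt4v('Which numbered mark is the chocolate? Answer with the number.', image)
print(answer)
</code></pre>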
<p dir="auto">Robotic Arm Control and Overall Integration<br />
After image processing and GPT-4V model processing, the interpreted instructions generate target position information. This position information is passed to the robotic arm control system, which moves to the corresponding location to perform the grabbing action.<br />
Key methods involved:<br />
Move to the target object.</p>
<pre><code>def move_to_object(self, object_no: int, speed: Optional[float] = None) -&gt; None:
    object_no = self._check_and_correct_object_no(object_no)
    print("[MyCobotController] Move to Object No. {}".format(object_no))
    detection = (
        np.array([-self._detections[object_no][0], -self._detections[object_no][1]]) + self.capture_coord.pos[:2]
    )
    print("[MyCobotController] Object pos:", detection[0], detection[1])
    self.move_to_xy(detection[0], detection[1], speed)
</code></pre>
<pre><code># Grab action
def grab(self, speed: Optional[float] = None) -&gt; None:
    print("[MyCobotController] Grab to Object")
    current_pos = self.current_coords().pos
    self.move_to_z(self.object_height + self.end_effector_height, speed)
    self._mycobot.set_basic_output(self._suction_pin, 0)  # suction on
    time.sleep(2)
    self.move_to_z(current_pos[2], speed)

# Move to a named place (e.g. the drop location)
def move_to_place(self, place_name: str, speed: Optional[float] = None) -&gt; None:
    print("[MyCobotController] Move to Place {}".format(place_name))
    self._current_position = self.positions[place_name]
    self._mycobot.sync_send_angles(
        np.array(self._current_position) + self.calc_gravity_compensation(self._current_position),
        speed or self._default_speed,
        self._command_timeout,
    )
    print("Current coords: {}".format(self.current_coords()))
</code></pre>
<p dir="auto">After each function is implemented, coordinate the entire process, streamline the workflow logic, and complete the task.<br />
The specific code can be viewed in the operator.py file.</p>
<h2>Example</h2>
<p dir="auto">Below is an example test to observe the project's outcome. The content involves a voice input saying "pick up the chocolate," and the robotic arm executes the task.<br />
<a href="https://youtu.be/Eda1m7DnIhQ" target="_blank" rel="noopener noreferrer nofollow ugc">https://youtu.be/Eda1m7DnIhQ</a></p>
<h2>Summary</h2>
<p dir="auto">This project demonstrates how to leverage advanced artificial intelligence and robotics technologies to accomplish complex automation tasks. By integrating voice recognition, natural language processing, image analysis, and precise robotic arm control, the project has successfully created a robotic system capable of understanding and executing spoken instructions. This not only enhances the naturalness and efficiency of robot-human interaction but also opens up new possibilities for robotic technology in various practical applications, such as automated manufacturing, logistics, assistive robots, and more.<br />
Finally, thanks again to Shirokuma for sharing this case with us. If you have better examples, feel free to contact us!</p>
]]></description><link>https://community.m5stack.com/topic/5970/ai-enhanced-robotics-gpt-4v-and-mycobot-in-action</link><guid isPermaLink="true">https://community.m5stack.com/topic/5970/ai-enhanced-robotics-gpt-4v-and-mycobot-in-action</guid><dc:creator><![CDATA[ElephantRobotics]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[Creating a Synchronized Robotic Arm Demo: Step-by-Step Guide]]></title><description><![CDATA[<h2>Introduction</h2>
<p dir="auto">Today, I am going to present a robotic arm model that I have independently designed and implemented. The core feature of this model is to achieve real-time gesture tracking - just a gentle drag with your hand, and the robotic arm can immediately follow your movements.</p>
<p dir="auto">The reason why I wanted to create such a model is that in some dangerous environments, we can use robotic arms to replace manual work, thereby avoiding threats to human life.</p>
<p dir="auto">You might ask, why not directly use remote keyboard control, joystick control, or APP control, but choose to manually drag? I believe that only manual operation can satisfy our need for precision to the greatest extent. Therefore, I decided to start trying to make this model and have initially completed the entire demo.</p>
<p dir="auto">I hope that through this demo, I can demonstrate the infinite possibilities of robotic arms to everyone, and at the same time, I hope to inspire everyone's infinite longing for future technology.</p>
<h2>Robotic Arm</h2>
<p dir="auto">The mechArm 270 is a 6 DOF robotic arm, with a compact structure design that can fit into a backpack for easy transportation. Importantly, it has many open control APIs, which allows you to quickly start controlling the robotic arm using Python. There are no complicated operations, and it even supports graphical programming, which allows people who are not very familiar with code to quickly get started with controlling the robotic arm.<br />
<img src="https://hackster.imgix.net/uploads/attachments/1644514/_8bvUAicseK.blob?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /><br />
The mechArm is a desktop robotic arm with a structure that mimics industrial designs. Its maximum working radius is 270mm, it can carry a load of 250g, and its repeat positioning accuracy is controlled within ±0.5mm.</p>
<h2>Project</h2>
<p dir="auto">After introducing the basic equipment, let's start with the record of how I created this demo.</p>
<p dir="auto">Environment:<br />
Operating system: Windows 11</p>
<p dir="auto">Programming language: Python 3.9+</p>
<p dir="auto">Python libraries: pymycobot, time</p>
<p dir="auto">pymycobot is an open-source library for Elephant Robotics, specifically designed to control the robotic arm of Elephant Robotics. Here is an example of a simple control code.</p>
<p dir="auto">Code：</p>
<pre><code># Main methods used

# Create an object that communicates with the robotic arm:
#   MyCobot(serial_port, baud_rate)
# Joint-angle control:
#   send_angles([list_of_angles], speed)
# Coordinate control:
#   send_coords([list_of_coords], speed, mode)

# Example:

import time
from pymycobot.mycobot import MyCobot

# create the object
mc = MyCobot("com7", 115200)

# angle control
mc.send_angles([0, 0, 0, 0, 0, 0], 100)
time.sleep(1)
mc.send_angles([90, 90, 90, 90, 90, 90], 100)
time.sleep(1)
</code></pre>
<p dir="auto">I briefly introduced how to use python to control mechArm. Isn’t it easy?</p>
<h2>Problem Analysis</h2>
<p dir="auto">Before starting the project, it's important to set up a framework and understand the specific problems we need to solve. For this, I made a flowchart of the project. In the following, I will refer to the manually controlled robotic arm as R1, and the following motion robotic arm as R2.<br />
<img src="https://hackster.imgix.net/uploads/attachments/1644515/_Glx8vYpjQD.blob?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /><br />
Control Robotic Arm: As mentioned above, the robotic arm can be controlled using the methods provided by the pymycobot library.</p>
<p dir="auto">Motion Control Methods: The R1 robotic arm can be dragged by hand, returning the current angle information of the robotic arm at all times. The R2 robotic arm controls based on the angle information received from R1.</p>
<p dir="auto">Communication Between Robotic Arms: This step is quite important in the entire project. Once established, the robotic arm can easily implement information transmission.</p>
<p dir="auto">Next, I will mainly explain the Motion Control Methods and Communication Between Robotic Arms.</p>
<h2>Motion Control Methods</h2>
<ol>
<li>Get real-time angle information</li>
</ol>
<p dir="auto">pymycobot provides the "get_angles()" method to obtain the angle information of the current robot arm.</p>
<pre><code># Obtain the current angle information of the robotic arm in real time
get_angles()

# example
print("real-time angles:", mc.get_angles())

# result: real-time angles: [0, 0, 0, 0, 0, 0]

# Continuously read the current angles
while True:
    angles = mc.get_angles()
    print(angles)
    time.sleep(0.1)  # go to the next step every 0.1 s
</code></pre>
<ol start="2">
<li>Set the Refresh Mode for the Robotic Arm</li>
</ol>
<p dir="auto">The refresh mode of the robotic arm mainly falls into two categories: interpolation mode and non-interpolation mode. These refer to the ways in which the end effector of the robotic arm is controlled during motion trajectory planning. If no mode is set, the robotic arm may not be able to perform the expected motion correctly, which may lead to the following consequences:</p>
<ol>
<li>
<p dir="auto">Unsmooth motion</p>
</li>
<li>
<p dir="auto">Inaccurate motion</p>
</li>
<li>
<p dir="auto">Discontinuous motion</p>
</li>
</ol>
<p dir="auto">Interpolation Mode: The interpolation mode can realize smooth and continuous trajectory planning, ensuring that the position and posture of the end effector of the robotic arm transition smoothly during the motion process.</p>
<p dir="auto">Non-Interpolation Mode: The non-interpolation mode means that the robotic arm only focuses on specific target points during the motion process, without performing interpolation calculations. Under the non-interpolation mode, the position and posture of the robotic arm will jump directly between key points, without undergoing a smooth transition.</p>
<p dir="auto">When multiple robotic arms use the interpolation mode for motion at the same time, there may be situations of waiting or queuing. Therefore, we choose to use the non-interpolation mode.</p>
<pre><code># Set the refresh mode:
# set_fresh_mode(mode)
#   1: no interpolation
#   0: interpolation

mc.set_fresh_mode(1)
</code></pre>
<p dir="auto">Our code that integrates the previous ones is as follows.</p>
<p dir="auto">Code:</p>
<pre><code>import time
from pymycobot.mycobot import MyCobot

mc = MyCobot("COM7", 115200)    #release arm
mb = MyCobot("COM11", 115200)   #move arm

mb.set_fresh_mode(1)    #no interpolation
time.sleep(1)
mc.release_all_servos() #release robot
time.sleep(1)
speed = 100
while True:
    angles = mc.get_angles()    #get release arm angles
    mb.send_angles(angles, speed)   #send angles to move arm
    time.sleep(0.1)
</code></pre>
<h2>Communication Between Robotic Arms</h2>
<p dir="auto">Our solution is to connect the two robotic arms to the same PC and talk to each of them through its own serial port.</p>
<pre><code># build the connections
from pymycobot.mycobot import MyCobot

mc = MyCobot("COM7", 115200)
mb = MyCobot("COM11", 115200)
</code></pre>
<p dir="auto">By using the most basic USB data cables for connection, we have two serial port numbers for the robotic arms on our computer, and we can send instructions to them separately.<br />
<img src="/assets/uploads/files/1699432440269-%E5%BE%AE%E4%BF%A1%E5%9B%BE%E7%89%87_20231108163344.png" alt="0_1699432439569_微信图片_20231108163344.png" class=" img-fluid img-markdown" /><br />
<a href="https://www.youtube.com/watch?v=NByjgoqc2O4" target="_blank" rel="noopener noreferrer nofollow ugc">https://www.youtube.com/watch?v=NByjgoqc2O4</a></p>
<h2>Summary</h2>
<p dir="auto">From the content, it can be seen that although we can achieve about 70-80% synchronization, there are other factors that can cause significant delays. The reasons for the delays could be various, such as the speed of data processing and transmission, the reaction speed of the robotic arm, software optimization, hardware performance, etc. All of these are potential factors that can cause delays.</p>
<p dir="auto">In addition, there is a significant limitation in that their communication is connected through serial ports. If the distance is a bit further, this method cannot be used, and its practicality is not strong. In the future, I will try to use wireless connections such as Bluetooth and WiFi to control the robotic arm.</p>
]]></description><link>https://community.m5stack.com/topic/5785/creating-a-synchronized-robotic-arm-demo-step-by-step-guide</link><guid isPermaLink="true">https://community.m5stack.com/topic/5785/creating-a-synchronized-robotic-arm-demo-step-by-step-guide</guid><dc:creator><![CDATA[ElephantRobotics]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[Harnessing the Power of ROS and MoveIt in Robotic Arm Manipulation]]></title><description><![CDATA[<h2>Introduction</h2>
<p dir="auto">This article will document my experience using myCobot 280 M5stack in ROS. Why use ROS? Because when it comes to robotics, it's hard to avoid the ROS operating system.<br />
Today is our first time using this system. Today, I will start with an introduction to ROS, the configuration of the environment, and the use of mycobot280 in ROS.</p>
<h2>ROS</h2>
<p dir="auto">ROS (Robot Operating System) is a flexible framework for writing robot software. It is a semi-open source project that provides researchers and developers with a set of tools and libraries to help them create various robot applications. ROS is mainly used for research and development of various robot systems, such as autonomous vehicles, service robots, industrial robots, drones, etc. It makes the development of robots more efficient and enhances the functionality and flexibility of robots. The two most notable modules in the ROS operating system are navigation and robotic arm control (moveit). Moveit allows users to quickly build robotic arm models and implement robotic arm control, such as modeling, kinematic solution, motion planning, obstacle avoidance, and more. Given such a powerful system, I want to use it to simulate some functions of the robotic arm, such as obstacle avoidance, path planning, etc.<br />
<img src="https://community.robotshop.com/forum/uploads/default/original/3X/e/9/e97ac6f06d51ef35cc4eb851b239108eed920b47.png" alt="alt text" class=" img-fluid img-markdown" /></p>
<h2>Environment configuration</h2>
<p dir="auto">ROS primarily supports operating systems like Ubuntu (Linux) and Mac OS (with partial support). The most common usage is on the Ubuntu system. My computer is installed with Ubuntu 18.04 version. The corresponding ROS version for Ubuntu 18 is Melodic. Different versions of Ubuntu correspond to different ROS versions.</p>
<pre><code>Ubuntu 16.04 - Kinetic
Ubuntu 18.04 - Melodic
Ubuntu 20.04 - Noetic
</code></pre>
<p dir="auto">Since my computer has already had the system and ROS environment installed, I won’t record each step here. The main focus will be on the operations within ROS. For the installation of the Ubuntu system and the setup of the ROS environment, you can search on Google. The ROS official website also has detailed installation tutorials.<br />
The scenario I’m using today involves controlling a robotic arm with MoveIt, so the demand for computer hardware is not high. Please note that if you are using features like SLAM radar navigation, you might need a higher-spec computer to handle the computational power. The standard to check whether ROS has been successfully installed is to open the terminal and type “roscore”. If the content that appears is the same as the image below, it means the installation was successful.<br />
<img src="https://community.robotshop.com/forum/uploads/default/original/3X/8/f/8ff4cf94f5fd258d418565c044be3c87cb07c988.png" alt="alt text" class=" img-fluid img-markdown" /><br />
Let’s get started with ROS!</p>
<h2>Project</h2>
<p dir="auto">To create a robotic arm in ROS and use some path planning features, you need to perform the following steps:</p>
<ul>
<li>
<p dir="auto">Create a URDF file, which is the model of the robotic arm.</p>
</li>
<li>
<p dir="auto">Load the URDF file into the ROS server.</p>
</li>
<li>
<p dir="auto">Use RViz for visualization.</p>
</li>
<li>
<p dir="auto">Create a MoveIt configuration package.</p>
</li>
<li>
<p dir="auto">Use MoveIt for motion planning.</p>
</li>
</ul>
<h3>Creating a URDF file</h3>
<p dir="auto">Creating a URDF file First, we need to create a URDF (Unified Robot Description Format) file for the robotic arm. This file is crucial as it describes the properties of the robotic arm, such as joints, links, and sensors, in the form of a 3D model. Since the product is the myCobot 280 M5Stack robotic arm from Elephant Robotics, Elephant Robotics has provided a URDF file for the myCobot 280 on Github, which describes some physical properties of the robotic arm.<br />
<strong>How to generate a URDF file:</strong><br />
You need to use CAD software to create the model, ensuring that the 3D model you create matches the actual size of the robotic arm. Because the exported model needs to be in a format that ROS can accept, it is usually a DAE (Collada) file.<br />
<img src="https://community.robotshop.com/forum/uploads/default/optimized/3X/b/b/bb1e983714caf20cb9f849f5cb42a225cc70b966_2_132x230.png" alt="alt text" class=" img-fluid img-markdown" /><br />
<img src="https://community.robotshop.com/forum/uploads/default/original/3X/6/8/68d5ea40ef82c5aa920c0970dd514484d1e5a0c6.png" alt="alt text" class=" img-fluid img-markdown" /></p>
<h3>Load URDF to ROS server</h3>
<p dir="auto"><strong>Create a ROS package</strong><br />
To create a package to store the URDF file, execute the following code in the terminal:</p>
<pre><code>catkin_create_pkg mycobot_description rospy std_msgs sensor_msgs
</code></pre>
<p dir="auto">Here, ‘rospy’, ‘std_msgs’, and ‘sensor_msgs’ are dependency packages in ROS. Add the generated URDF file to the mycobot_description/urdf directory.</p>
<p dir="auto"><strong>Create a launch file</strong><br />
In the mycobot_description/launch directory, create a file named load_urdf.launch and add the following content to it.</p>
<pre><code>&lt;launch&gt;
  &lt;param name="robot_description" command="$(find xacro)/xacro.py $(find mycobot_description)/urdf/mycobot.urdf.xacro" /&gt;
&lt;/launch&gt;
</code></pre>
<p dir="auto"><strong>Lanuch ROS file</strong><br />
Open the terminal and enter the following command:</p>
<pre><code>roslaunch mycobot_description load_urdf.launch
</code></pre>
<p dir="auto">After running, we can see the model we created in RViz.<br />
<img src="https://community.robotshop.com/forum/uploads/default/original/3X/1/4/14382b10b51a7b9fc990a15de195dad00d0fb271.png" alt="alt text" class=" img-fluid img-markdown" /></p>
<h2>Configure MoveIt</h2>
<p dir="auto">Start MoveIt Setup Assistant:</p>
<p dir="auto">In a terminal, run the following command to start MoveIt Setup Assistant:</p>
<pre><code>roslaunch moveit_setup_assistant setup_assistant.launch
</code></pre>
<p dir="auto"><img src="https://community.robotshop.com/forum/uploads/default/original/3X/3/2/324920697c3a5020b017261cc3c11257f408f015.png" alt="alt text" class=" img-fluid img-markdown" /><br />
Load the URDF in MoveIt: in the main interface of the MoveIt Setup Assistant, click the “Create New MoveIt Configuration Package” button. Then, in the pop-up dialog box, select the URDF file (or use the ROS parameter robot_description). Click the “Load Files” button to load it. If the screen matches the image below, the file has been loaded successfully.<br />
<img src="https://community.robotshop.com/forum/uploads/default/original/3X/a/d/ad048036c1f67ce310cf95cdbae162682e7bb202.png" alt="alt text" class=" img-fluid img-markdown" /></p>
<h2>Configure Robot</h2>
<p dir="auto">Configure your robot according to your needs. Here are a few configurations briefly introduced:</p>
<ul>
<li>Self-Collisions: MoveIt will automatically calculate the self-collision matrix of your robot. You can<br />
click the “Regenerate Default Collision Matrix” button to generate a default self-collision matrix.</li>
<li>Virtual Joints: You can define virtual joints for your robot. Virtual joints are often used to connect your robot to the world.</li>
<li>Planning Groups: You can define the planning groups of your robot. Planning groups are a set of joints and links that need to move together. For example, you can create a planning group that includes all arm joints.</li>
<li>Robot Poses: You can define preset poses for your robot. These preset poses can be reused in planning.</li>
</ul>
<p dir="auto">After the configuration is completed, a .launch file will be generated, and we can use the robotic arm in moveit by running it.<br />
<img src="https://community.robotshop.com/forum/uploads/default/original/3X/8/f/8f484e96903b260c9056f27f3d17f3294a721b22.png" alt="alt text" class=" img-fluid img-markdown" /><br />
Let’s see how MoveIt performs path planning. By dragging the end of the robotic arm, we can change its target pose, and we can also generate random paths.<br />
<img src="https://community.robotshop.com/forum/uploads/default/original/3X/a/2/a2f8cb4d6678f2824942cc3b7a557c23cea30434.gif" alt="alt text" class=" img-fluid img-markdown" /><br />
<strong>Object obstacle avoidance</strong></p>
<p dir="auto">We add an obstacle in MoveIt for the robotic arm to avoid. We add a cube, so the robotic arm will navigate around it when it follows its trajectory.<br />
<img src="https://community.robotshop.com/forum/uploads/default/original/3X/a/5/a56b9dba256588bb104e91fca64e5e941bb6df94.gif" alt="alt text" class=" img-fluid img-markdown" /><br />
In addition to this, MoveIt has many more features: motion planning, which performs path planning for multi-joint robots with collision avoidance using algorithms such as RRT, EST, and LBKPIECE; collision detection, which carries out complex 3D collision checking, including self-collision and collisions with the environment (obstacles); and simulation support, integrating with the Gazebo robot simulator for realistic physics, among others.</p>
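<p dir="auto">For readers who prefer scripting these steps, here is a minimal sketch of the same cube-avoidance idea using the moveit_commander Python interface. The planning group name "arm_group" and the box pose are illustrative assumptions; they depend on what you configured in the Setup Assistant.</p>
<pre><code>import sys

import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

# Connect to the running MoveIt instance
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("mycobot_obstacle_demo")

scene = moveit_commander.PlanningSceneInterface()
group = moveit_commander.MoveGroupCommander("arm_group")  # assumed group name
rospy.sleep(1.0)  # give the planning scene time to connect

# Add a cube obstacle in front of the arm (pose values are placeholders)
box = PoseStamped()
box.header.frame_id = group.get_planning_frame()
box.pose.position.x = 0.15
box.pose.position.z = 0.10
box.pose.orientation.w = 1.0
scene.add_box("cube", box, size=(0.05, 0.05, 0.05))

# Plan to a nearby pose target; MoveIt routes the arm around the cube
target = group.get_current_pose().pose
target.position.x += 0.05
group.set_pose_target(target)
group.go(wait=True)
group.stop()
group.clear_pose_targets()
</code></pre>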
<h2>Summary</h2>
<p dir="auto">MoveIt provides a comprehensive set of tools for research into the motion planning and control of robotic arms, eliminating the need to build complex motion planning systems from scratch. This allows developers to focus more of their energy on implementing specific application functions, rather than on solving underlying motion planning and control problems. Overall, ROS and MoveIt provide strong support for the development of robotic technology. They not only reduce development difficulties and improve development efficiency, but also promote innovation and application in robotic technology. Whether you are a researcher in robotic technology or a developer in practical applications, ROS and MoveIt will be indispensable tools for you.</p>
<p dir="auto">In the future, I will continue to explore more robotic arm projects and share my findings and achievements from time to time. If you find any errors in this article, feel free to communicate with me. If you like this article, your likes and comments will be the greatest support for me!</p>
]]></description><link>https://community.m5stack.com/topic/5650/harnessing-the-power-of-ros-and-moveit-in-robotic-arm-manipulation</link><guid isPermaLink="true">https://community.m5stack.com/topic/5650/harnessing-the-power-of-ros-and-moveit-in-robotic-arm-manipulation</guid><dc:creator><![CDATA[Yuly12]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[&quot;A fatal error occurred: Timed out waiting for packet header&quot;]]></title><description><![CDATA[<p dir="auto">Have you got the correct port selected for the M5StickCPlus in arduino?</p>
]]></description><link>https://community.m5stack.com/topic/5324/a-fatal-error-occurred-timed-out-waiting-for-packet-header</link><guid isPermaLink="true">https://community.m5stack.com/topic/5324/a-fatal-error-occurred-timed-out-waiting-for-packet-header</guid><dc:creator><![CDATA[ajb2k3]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[Object Tracking on myCobot 280 Jetson Nano: A Case Study]]></title><description><![CDATA[<h2>Introduction</h2>
<p dir="auto">When we saw a video on YouTube of someone using a robotic arm to achieve object tracking, it deeply inspired us. We became very interested in this project and decided to independently develop a similar program.</p>
<p dir="auto"><img src="https://cdn.hackaday.io/images/original/1433151684201201514.gif" alt="alt text" class=" img-fluid img-markdown" /></p>
<h2>myCobot 280 M5Stack</h2>
<p dir="auto">The robotic arm used for the operation is the myCobot280 M5Stack. This is a small 6-axis robotic arm produced by Elephant Robotics, with M5Stack-Basic as the microprocessor, ESP32 as the auxiliary control, and a UR collaborative structure. The myCobot280 M5Stack-Basic has a body weight of 800g, a payload of 250g, a working radius of 280mm, and a compact and portable design. Despite its small size, it is powerful and easy to operate, capable of collaborating with humans and working safely.<br />
<img src="https://hackster.imgix.net/uploads/attachments/1568027/image_WbMNU21RuO.png?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /></p>
<h2>Process</h2>
<p dir="auto">The following image is a flowchart of the project development process.<br />
<img src="https://cdn.hackaday.io/images/6695771684202822381.jpg" alt="alt text" class=" img-fluid img-markdown" /></p>
<h3>Capture the target</h3>
<p dir="auto">Before beginning development, we conducted some research and experiments. We used a camera to capture images of objects and utilized the OpenCV library for recognition. We attempted various methods, but object recognition required machine learning for the target we wanted to identify, which would increase the project development time. Ultimately, we decided to use aruco codes for identification, which allowed us to quickly capture the aruco codes and proceed to the next stage of development.<br />
<img src="https://cdn.hackaday.io/images/4776861684202919859.jpg" alt="alt text" class=" img-fluid img-markdown" /><br />
Code:</p>
<pre><code># Method of the tracking class (excerpt); assumes cv2 (OpenCV) and
# numpy (as np) are imported and a MyCobot instance `mc` exists.
def show_video_v2(self):
        # self.robot.init_robot()
        xyz = np.array([0,0,0])
        LIST = []
        num_count = 0
        list_len = 5
        # cmax = [180, 80, 240]
        # cmin = [130, -80, 200]
        cmax = [150, -150, 300]
        cmin = [-150, -250, 200]

        while cv2.waitKey(1) &lt; 0:
            success, img = self.cap.read()
            if not success:
                print("It seems that the image cannot be acquired correctly.")
                break
            # convert the image to grayscale
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            # Detect ArUco marker.
            corners, ids, rejectImaPoint = cv2.aruco.detectMarkers(
                gray, self.aruco_dict, parameters=self.aruco_params
            )

            if len(corners) &gt; 0:
                if ids is not None:
                    # get the pose of the detected ArUco marker
                    ret = cv2.aruco.estimatePoseSingleMarkers(
                        # '''https://stackoverflow.com/questions/53303730/what-is-the-value-for-markerlength-in-aruco-estimateposesinglemarkers'''
                        corners, 0.025, self.camera_matrix, self.dist_coeffs
                    )
                    # rvec: rotation vector, tvec: translation vector
                    (rvec, tvec) = (ret[0], ret[1])

                    (rvec - tvec).any()
                    xyz = tvec[0, 0, :] * 1000
                    rpy = rvec[0, 0, :]

                    camera = np.array([xyz[0], xyz[1], xyz[2]])

                    if num_count &gt; list_len:
                        target = model_track(camera)
                        print("target", target)

                        for i in range(3):
                            if target[i] &gt; cmax[i]:
                                target[i] = cmax[i]
                            if target[i] &lt; cmin[i]:
                                target[i] = cmin[i]

                        pose = np.array([-103, 8.9, -164])
                        coord = np.concatenate((target.copy(), pose), axis=0)

                        # q1 = math.atan(xyz[0] / xyz[2])*180/np.pi
                        mc.send_coords(coord,50,0)
                        
                        
                        # print('target', coord)
                        num_count = 1
                    else:
                        num_count = num_count + 1
                    

                    for i in range(rvec.shape[0]):
                        # draw the aruco on img
                        cv2.aruco.drawDetectedMarkers(img, corners)
            cv2.imshow("show_video", img)
</code></pre>
<h2>Hand-eye calibration</h2>
<p dir="auto">Hand-eye calibration refers to the process of determining the position and orientation of the robot end effector (such as a mechanical arm) relative to the robot base coordinate system in the field of robotics. This process involves pairing the robot end effector with a camera and then determining its position and orientation in the robot base coordinate system by capturing its position and orientation in the camera's field of view.<br />
Hand-eye calibration typically involves a series of movements between the robot end effector and the camera to collect enough data to calculate the transformation matrix between them. This transformation matrix describes the position and orientation of the robot end effector relative to the camera, which can be used to control the robot's motion and accurately perform the required tasks.<br />
In "eye-to-hand" hand-eye calibration, the camera is considered a stationary observer ("eye"), while the robot end effector is considered a moving object in the camera's field of view ("hand"). As the robot end effector moves around the camera, a series of images are collected that contain information about the end effector's position and orientation at different locations and orientations. By analyzing these images, the position and orientation of the robot end effector relative to the camera can be calculated, completing the hand-eye calibration.<br />
The following is the code for processing the coordinate transformation data.</p>
<pre><code>import numpy as np

prev_similarity = None  # used by similarity_change_rate() below

# The function is used to calculate the similarity between camera readings.
def calculate_similarity(camera):
    n = camera.shape[0]
    total_similarity = 0
    for i in range(n):
        for j in range(i+1, n):
            vector_a = camera[i]
            vector_b = camera[j]
            dot_product = np.dot(vector_a, vector_b)
            norm_a = np.linalg.norm(vector_a)
            norm_b = np.linalg.norm(vector_b)
            similarity = dot_product / (norm_a * norm_b)
            total_similarity += similarity
    return total_similarity/n
# The function is used to calculate the rate of change in similarity.
def similarity_change_rate(new_similarity):
    global prev_similarity
    if prev_similarity is None:
        prev_similarity = new_similarity
        return 0
    else:
        change_rate = (new_similarity - prev_similarity) / prev_similarity
        prev_similarity = new_similarity
        return change_rate

#The function is used to convert a rotation matrix to Euler angles.
def CvtRotationMatrixToEulerAngle(pdtRotationMatrix):
    pdtEulerAngle = np.zeros(3)

    pdtEulerAngle[2] = np.arctan2(pdtRotationMatrix[1, 0], pdtRotationMatrix[0, 0])

    fCosRoll = np.cos(pdtEulerAngle[2])
    fSinRoll = np.sin(pdtEulerAngle[2])

    pdtEulerAngle[1] = np.arctan2(-pdtRotationMatrix[2, 0], (fCosRoll * pdtRotationMatrix[0, 0]) + (fSinRoll * pdtRotationMatrix[1, 0]))
    pdtEulerAngle[0] = np.arctan2((fSinRoll * pdtRotationMatrix[0, 2]) - (fCosRoll * pdtRotationMatrix[1, 2]), (-fSinRoll * pdtRotationMatrix[0, 1]) + (fCosRoll * pdtRotationMatrix[1, 1]))

    return pdtEulerAngle
# The function is used to convert Euler angles to a rotation matrix.
def CvtEulerAngleToRotationMatrix(ptrEulerAngle):
    ptrSinAngle = np.sin(ptrEulerAngle)
    ptrCosAngle = np.cos(ptrEulerAngle)

    ptrRotationMatrix = np.zeros((3, 3))
    ptrRotationMatrix[0, 0] = ptrCosAngle[2] * ptrCosAngle[1]
    ptrRotationMatrix[0, 1] = ptrCosAngle[2] * ptrSinAngle[1] * ptrSinAngle[0] - ptrSinAngle[2] * ptrCosAngle[0]
    ptrRotationMatrix[0, 2] = ptrCosAngle[2] * ptrSinAngle[1] * ptrCosAngle[0] + ptrSinAngle[2] * ptrSinAngle[0]
    ptrRotationMatrix[1, 0] = ptrSinAngle[2] * ptrCosAngle[1]
    ptrRotationMatrix[1, 1] = ptrSinAngle[2] * ptrSinAngle[1] * ptrSinAngle[0] + ptrCosAngle[2] * ptrCosAngle[0]
    ptrRotationMatrix[1, 2] = ptrSinAngle[2] * ptrSinAngle[1] * ptrCosAngle[0] - ptrCosAngle[2] * ptrSinAngle[0]
    ptrRotationMatrix[2, 0] = -ptrSinAngle[1]
    ptrRotationMatrix[2, 1] = ptrCosAngle[1] * ptrSinAngle[0]
    ptrRotationMatrix[2, 2] = ptrCosAngle[1] * ptrCosAngle[0]

    return ptrRotationMatrix

</code></pre>
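<p dir="auto">A quick way to check that the two Euler helpers are consistent with each other is a round trip (a small test of my own, assuming the functions above are in scope):</p>
<pre><code>import numpy as np

# angles -&gt; rotation matrix -&gt; angles should return the inputs
angles = np.radians([10.0, 20.0, 30.0])   # roll, pitch, yaw
R = CvtEulerAngleToRotationMatrix(angles)
recovered = CvtRotationMatrixToEulerAngle(R)
print(np.degrees(recovered))              # expect approximately [10. 20. 30.]
</code></pre>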
<h2>Robotic Arm Control</h2>
<p dir="auto">After this step, object detection and control of the mechanical arm follow, which involves converting the coordinates of the recognized object into motion commands for the mechanical arm. The pymycobot library is used to control the mechanical arm.</p>
<pre><code># The function is used for visual tracking and calculating the target position.
# (Relies on numpy as np and CvtEulerAngleToRotationMatrix defined above.)
def Visual_tracking280(coord, camera):
    pose_camera = camera[:3]
    angle_camera = camera[3:]
    r = CvtEulerAngleToRotationMatrix(angle_camera)
    # r = np.array([[1, 0, 0],
    #               [0, 1, 0],
    #               [0, 0, 1]])
    euler = np.radians(coord[3:])
    R = CvtEulerAngleToRotationMatrix(euler)
    offset = np.array([0, 0, -250])
    Roff = np.array([[1, 0, 0],
                     [0, -1, 0],
                     [0, 0, -1]])
    # Roff = np.array([[1, 0, 0],
    #                  [0, 1, 0],
    #                  [0, 0, 1]])
    vector = pose_camera + offset
    # print("R", R)
    # print("r", r)

    move_pos = np.dot(np.dot(R, r), Roff).dot(vector)
    pos = coord[:3] + move_pos
    # angle = np.array(CvtRotationMatrixToEulerAngle(np.dot(np.dot(R, r), Roff))) * 180/np.pi
    angle =  coord[3:]
    target = np.concatenate((pos, angle))
    return target
    
# Calculate the target position based on the camera coordinates.
def model_track(camera):
    model_pos = np.array([-camera[0], -camera[2], -camera[1]])
    camera_pos = np.array([-37.5, 416.6, 322.9])
    target_pos = model_pos + camera_pos
    # print("model_pos", model_pos)
    # print("target_pos", target_pos)
    return target_pos

</code></pre>
<p dir="auto">Finally, let's summarize the logical relationship of the project.<br />
<img src="https://cdn.hackaday.io/images/4871371684203022760.jpg" alt="alt text" class=" img-fluid img-markdown" /><br />
Let's take a look at how it performs.<br />
<img src="https://cdn.hackaday.io/images/original/6687121684203051230.gif" alt="alt text" class=" img-fluid img-markdown" /><br />
It may be noticed that sometimes myCobot does not move. This is because its body was blocking the camera, preventing it from capturing the target object. When moving objects, it is important to ensure that the mechanical arm body does not block the camera.</p>
<p dir="auto">This code is applicable to the entire myCobot280 series, including Pi, Jetson Nano, and other versions. The parameters may need to be adjusted based on the specific version being used.</p>
<p dir="auto">People often compare the Jetson Nano with the Raspberry Pi. I have tried this program on two different robotic arms, and it is evident that the Jetson Nano version is much more responsive than the Raspberry Pi, owing to its superior computational power.There is a noticeable delay of approximately one second between them as observed by the naked eye.</p>
<h2>Summary</h2>
<p dir="auto">During the debugging process, we found that the tracking effect was not very smooth and responsive. We adjusted the smoothness by controlling the detection cycle, but it was necessary to slowly move the tracked object to achieve better results. There are still some shortcomings, as the body of the mechanical arm may block the camera's field of view when the camera is fixed, making it impossible to proceed with the next tracking step. One solution we thought of is to move the camera to a position where it is not blocked (which would require recalculating the coordinates). If you have any better ideas, please feel free to communicate with us! Thank you for your patience.</p>
]]></description><link>https://community.m5stack.com/topic/5302/object-tracking-on-mycobot-280-jetson-nano-a-case-study</link><guid isPermaLink="true">https://community.m5stack.com/topic/5302/object-tracking-on-mycobot-280-jetson-nano-a-case-study</guid><dc:creator><![CDATA[ElephantRobotics]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[Unable to load new FW from Arduino IDE.]]></title><description><![CDATA[<p dir="auto"><a class="mention plugin-mentions-user plugin-mentions-a" href="https://community.m5stack.com/uid/7508">@teastain</a> This is the decoded backtrace:</p>
<p dir="auto">PC: 0x04d25826<br />
EXCVADDR: 0x04d25824</p>
<p dir="auto">Decoding stack results<br />
0x40090d71: prvIdleTask at /home/runner/work/esp32-arduino-lib-builder/esp32-arduino-lib-builder/esp-idf/components/freertos/tasks.c line 3382<br />
0x4008f5b6: vPortTaskWrapper at /home/runner/work/esp32-arduino-lib-builder/esp32-arduino-lib-builder/esp-idf/components/freertos/port.c line 143</p>
]]></description><link>https://community.m5stack.com/topic/5160/unable-to-load-new-fw-from-arduino-ide</link><guid isPermaLink="true">https://community.m5stack.com/topic/5160/unable-to-load-new-fw-from-arduino-ide</guid><dc:creator><![CDATA[fbritop]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[M5Stack.h: No such file or directory with M5StickCPlus?]]></title><description><![CDATA[<p dir="auto"><a class="mention plugin-mentions-user plugin-mentions-a" href="https://community.m5stack.com/uid/18806">@wsanders</a> At least with respect to the LCD library: The only function implemented so far to manipulate whole images is drawBitmap. drawJpg and drawJpgFile are commented out in <a href="https://github.com/m5stack/M5StickC-Plus/blob/master/src/M5Display.h" target="_blank" rel="noopener noreferrer nofollow ugc">https://github.com/m5stack/M5StickC-Plus/blob/master/src/M5Display.h</a> so I assume they aren't working yet.</p>
<p dir="auto">So my workflow for now is:</p>

Save the image from gimp  in gimp's ".h" format. This is smaller than a xpm or bmp. You will get a static char *data structure of all the pixels in the image. The .h file includes a macro to extract the pixels:

<p dir="auto">#define HEADER_PIXEL(data,pixel) {<br />
pixel[0] = (((data[0] - 33) &lt;&lt; 2) | ((data[1] - 33) &gt;&gt; 4)); <br />
pixel[1] = ((((data[1] - 33) &amp; 0xF) &lt;&lt; 4) | ((data[2] - 33) &gt;&gt; 2)); <br />
pixel[2] = ((((data[2] - 33) &amp; 0x3) &lt;&lt; 6) | ((data[3] - 33))); <br />
data += 4; <br />
}</p>


<p dir="auto">Write your own function rgb888to565 to compress the pixels into a uint16_t.</p>


<p dir="auto">Draw a bitmap of the image as fast as you can:</p>


<p dir="auto">#include &lt;M5StickCPlus.h&gt;<br />
#include "1.h"<br />
int pixel[3];<br />
// pointer fu to preserve the start of .h data<br />
char *datastart;<br />
uint16_t *bitmap;</p>
<p dir="auto">void setup() {<br />
M5.begin();<br />
M5.Lcd.setRotation(3);<br />
bitmap = (uint16_t *)malloc(height * width * 2);<br />
}</p>
<p dir="auto">void loop() {<br />
M5.Lcd.fillScreen(GREEN);<br />
datastart = data;<br />
for (int16_t y=0; y &lt; height; y++) {<br />
for (int16_t x=0; x &lt; width; x++) {<br />
HEADER_PIXEL(data, pixel);<br />
bitmap[60*y + x] = rgb888to565(pixel[0], pixel[1], pixel[2]);<br />
}<br />
}<br />
M5.Lcd.drawBitmap(0,0,width,height,bitmap);<br />
data = datastart;<br />
}</p>
<p dir="auto">Or you can use the Sprite library, which works well.</p>
]]></description><link>https://community.m5stack.com/topic/5045/m5stack-h-no-such-file-or-directory-with-m5stickcplus</link><guid isPermaLink="true">https://community.m5stack.com/topic/5045/m5stack-h-no-such-file-or-directory-with-m5stickcplus</guid><dc:creator><![CDATA[wsanders]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[M5Paper: fast update to display keyboard entries]]></title><description><![CDATA[<p dir="auto">Hi all,<br />
I have designed a keyboard on the M5Paper to type in letters and to display the content on the ePaper display. Currently I use "canvas.pushCanvas(0,0,UPDATE_MODE_DU4);" which seems to me to be the fastest update mode.</p>
<p dir="auto">However when I type in the letters, I still have to wait "too long" until the display is updated.</p>
<p dir="auto">What can you recommend being the fastest way to display the new letters? Is there a possibility to partially update the display? I tried "M5.EPD.UpdateArea(0,60,540,400,UPDATE_MODE_GL16);" but nothing changes.</p>
<p dir="auto">Thanks.</p>
<p dir="auto">DiWa</p>
<p dir="auto">CODE (needful part of it):</p>
<p dir="auto">void Tastatur(){<br />
canvas.fillCanvas(0);<br />
Header();<br />
canvas.setTextArea(10, 60, 520, 460);<br />
canvas.setTextSize(48);<br />
canvas.print(Eingabe);</p>
<p dir="auto">//Raster<br />
canvas.setTextSize(32);<br />
canvas.drawRoundRect( 10, 430, 168, 80, 10, 15);<br />
canvas.drawRoundRect(186, 430, 168, 80, 10, 15);<br />
canvas.drawRoundRect(362, 430, 168, 80, 10, 15);<br />
canvas.fillRoundRect( 11, 431, 166, 78, 9, 2);<br />
canvas.fillRoundRect(187, 431, 166, 78, 9, 2);<br />
canvas.fillRoundRect(363, 431, 166, 78, 9, 2);<br />
canvas.drawString("wiki", 65, 455);<br />
canvas.drawString("wo", 250, 455);<br />
canvas.drawString("123", 415, 455);</p>
<p dir="auto"><a href="//canvas.createRender" target="_blank" rel="noopener noreferrer nofollow ugc">//canvas.createRender</a>(48, 256);<br />
canvas.setTextSize(48);<br />
canvas.drawRoundRect( 10, 518, 80, 80, 10, 15); canvas.drawString("a",  35, 538);<br />
canvas.drawRoundRect( 98, 518, 80, 80, 10, 15); canvas.drawString("b", 123, 538);<br />
canvas.drawRoundRect(186, 518, 80, 80, 10, 15); canvas.drawString("c", 211, 538);<br />
canvas.drawRoundRect(274, 518, 80, 80, 10, 15); canvas.drawString("d", 299, 538);<br />
canvas.drawRoundRect(362, 518, 80, 80, 10, 15); canvas.drawString("e", 387, 538);<br />
canvas.drawRoundRect(450, 518, 80, 80, 10, 15); canvas.drawString("f", 475, 538);<br />
canvas.drawRoundRect( 10, 606, 80, 80, 10, 15); canvas.drawString("g", 35, 626);<br />
canvas.drawRoundRect( 98, 606, 80, 80, 10, 15); canvas.drawString("h", 123, 626);<br />
canvas.drawRoundRect(186, 606, 80, 80, 10, 15); canvas.drawString("i", 216, 626);<br />
canvas.drawRoundRect(274, 606, 80, 80, 10, 15); canvas.drawString("j", 305, 626);<br />
canvas.drawRoundRect(362, 606, 80, 80, 10, 15); canvas.drawString("k", 387, 626);<br />
canvas.drawRoundRect(450, 606, 80, 80, 10, 15); canvas.drawString("l", 475, 626);<br />
canvas.drawRoundRect( 10, 694, 80, 80, 10, 15); canvas.drawString("m", 33, 714);<br />
canvas.drawRoundRect( 98, 694, 80, 80, 10, 15); canvas.drawString("n", 123, 714);<br />
canvas.drawRoundRect(186, 694, 80, 80, 10, 15); canvas.drawString("o", 211, 714);<br />
canvas.drawRoundRect(274, 694, 80, 80, 10, 15); canvas.drawString("p", 299, 714);<br />
canvas.drawRoundRect(362, 694, 80, 80, 10, 15); canvas.drawString("q", 387, 714);<br />
canvas.drawRoundRect(450, 694, 80, 80, 10, 15); canvas.drawString("r", 475, 714);<br />
canvas.drawRoundRect( 10, 782, 80, 80, 10, 15); canvas.drawString("s",  35, 802);<br />
canvas.drawRoundRect( 98, 782, 80, 80, 10, 15); canvas.drawString("t", 123, 802);<br />
canvas.drawRoundRect(186, 782, 80, 80, 10, 15); canvas.drawString("u", 211, 802);<br />
canvas.drawRoundRect(274, 782, 80, 80, 10, 15); canvas.drawString("v", 299, 802);<br />
canvas.drawRoundRect(362, 782, 80, 80, 10, 15); canvas.drawString("w", 387, 802);<br />
canvas.drawRoundRect(450, 782, 80, 80, 10, 15); canvas.drawString("x", 475, 802);<br />
canvas.drawRoundRect( 10, 870, 80, 80, 10, 15); canvas.drawString("y",  35, 890);<br />
canvas.drawRoundRect( 98, 870, 80, 80, 10, 15); canvas.drawString("z", 123, 890);<br />
canvas.drawRoundRect(186, 870, 80, 80, 10, 15); canvas.drawString("?", 211, 890);</p>
<p dir="auto">canvas.drawRoundRect(275, 870, 80, 80, 10, 15); canvas.fillRoundRect(276, 871, 78, 78, 9, 2); canvas.drawString("_", 305, 890);<br />
canvas.drawRoundRect(362, 870, 80, 80, 10, 15); canvas.fillRoundRect(363, 871, 78, 78, 9, 2); canvas.drawString("&lt;-", 375, 890);<br />
canvas.drawRoundRect(450, 870, 80, 80, 10, 15); canvas.fillRoundRect(451, 871, 78, 78, 9, 2); canvas.drawString("OK", 455, 890);</p>
<p dir="auto">canvas.pushCanvas(0,0,UPDATE_MODE_GC16);</p>
<p dir="auto">int touch = 0;<br />
int Auswahl = 0;<br />
int Leer_Stelle = 0;<br />
int Zeichen = 0;</p>
<p dir="auto">while(Auswahl == 0){<br />
if(M5.TP.avaliable()){<br />
if(!M5.TP.isFingerUp()){<br />
M5.TP.update();<br />
bool is_update = false;<br />
for(int i=0;i&lt;2; i++){<br />
tp_finger_t FingerItem = M5.TP.readFinger(i);<br />
if((point[i][0]!=FingerItem.x)||(point[i][1]!=FingerItem.y)){<br />
is_update = true;<br />
point[i][0] = FingerItem.x;<br />
point[i][1] = FingerItem.y;<br />
if (FingerItem.x &gt;= 430 &amp;&amp; FingerItem.x &lt; 510 &amp;&amp; FingerItem.y &gt;= 360 &amp;&amp; FingerItem.y &lt;= 530) {<br />
Serial.println("wiki");<br />
canvas.setTextSize(32);<br />
Eingabe = "Schreib einen wiki\nArtikel über\n";<br />
canvas.pushCanvas(0,0,UPDATE_MODE_DU4);<br />
}<br />
if (FingerItem.x &gt;= 430 &amp;&amp; FingerItem.x &lt; 510 &amp;&amp; FingerItem.y &gt;= 274 &amp;&amp; FingerItem.y &lt;= 352) {<br />
Serial.println("wo");<br />
canvas.setTextSize(32);<br />
Eingabe = "wo finde ich ";<br />
canvas.pushCanvas(0,0,UPDATE_MODE_GC16);<br />
}<br />
if (FingerItem.x &gt;= 430 &amp;&amp; FingerItem.x &lt; 510 &amp;&amp; FingerItem.y &gt;=  98 &amp;&amp; FingerItem.y &lt;= 178) {<br />
Serial.println("123");<br />
Sonderzeichen();<br />
}<br />
if (FingerItem.x &gt;= 518 &amp;&amp; FingerItem.x &lt; 598 &amp;&amp; FingerItem.y &gt;= 450 &amp;&amp; FingerItem.y &lt;= 530) Eingabe = Eingabe + "a"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("a");<br />
if (FingerItem.x &gt;= 518 &amp;&amp; FingerItem.x &lt; 598 &amp;&amp; FingerItem.y &gt;= 362 &amp;&amp; FingerItem.y &lt;= 442) Eingabe = Eingabe + "b"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("b");<br />
if (FingerItem.x &gt;= 518 &amp;&amp; FingerItem.x &lt; 598 &amp;&amp; FingerItem.y &gt;= 274 &amp;&amp; FingerItem.y &lt;= 352) Eingabe = Eingabe + "c"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("c");<br />
if (FingerItem.x &gt;= 518 &amp;&amp; FingerItem.x &lt; 598 &amp;&amp; FingerItem.y &gt;= 186 &amp;&amp; FingerItem.y &lt;= 266) Eingabe = Eingabe + "d"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("d");<br />
if (FingerItem.x &gt;= 518 &amp;&amp; FingerItem.x &lt; 598 &amp;&amp; FingerItem.y &gt;=  98 &amp;&amp; FingerItem.y &lt;= 178) Eingabe = Eingabe + "e"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("e");<br />
if (FingerItem.x &gt;= 518 &amp;&amp; FingerItem.x &lt; 598 &amp;&amp; FingerItem.y &gt;=  10 &amp;&amp; FingerItem.y &lt;=  90) Eingabe = Eingabe + "f"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("f");<br />
if (FingerItem.x &gt;= 606 &amp;&amp; FingerItem.x &lt; 686 &amp;&amp; FingerItem.y &gt;= 450 &amp;&amp; FingerItem.y &lt;= 530) Eingabe = Eingabe + "g"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("g");<br />
if (FingerItem.x &gt;= 606 &amp;&amp; FingerItem.x &lt; 686 &amp;&amp; FingerItem.y &gt;= 362 &amp;&amp; FingerItem.y &lt;= 442) Eingabe = Eingabe + "h"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("h");<br />
if (FingerItem.x &gt;= 606 &amp;&amp; FingerItem.x &lt; 686 &amp;&amp; FingerItem.y &gt;= 274 &amp;&amp; FingerItem.y &lt;= 352) Eingabe = Eingabe + "i"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("i");<br />
if (FingerItem.x &gt;= 606 &amp;&amp; FingerItem.x &lt; 686 &amp;&amp; FingerItem.y &gt;= 186 &amp;&amp; FingerItem.y &lt;= 266) Eingabe = Eingabe + "j"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("j");<br />
if (FingerItem.x &gt;= 606 &amp;&amp; FingerItem.x &lt; 686 &amp;&amp; FingerItem.y &gt;=  98 &amp;&amp; FingerItem.y &lt;= 178) Eingabe = Eingabe + "k"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("k");<br />
if (FingerItem.x &gt;= 606 &amp;&amp; FingerItem.x &lt; 686 &amp;&amp; FingerItem.y &gt;=  10 &amp;&amp; FingerItem.y &lt;=  90) Eingabe = Eingabe + "l"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("l");<br />
if (FingerItem.x &gt;= 694 &amp;&amp; FingerItem.x &lt; 774 &amp;&amp; FingerItem.y &gt;= 450 &amp;&amp; FingerItem.y &lt;= 530) Eingabe = Eingabe + "m"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("m");<br />
if (FingerItem.x &gt;= 694 &amp;&amp; FingerItem.x &lt; 774 &amp;&amp; FingerItem.y &gt;= 362 &amp;&amp; FingerItem.y &lt;= 442) Eingabe = Eingabe + "n"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("n");<br />
if (FingerItem.x &gt;= 694 &amp;&amp; FingerItem.x &lt; 774 &amp;&amp; FingerItem.y &gt;= 274 &amp;&amp; FingerItem.y &lt;= 352) Eingabe = Eingabe + "o"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("o");<br />
if (FingerItem.x &gt;= 694 &amp;&amp; FingerItem.x &lt; 774 &amp;&amp; FingerItem.y &gt;= 186 &amp;&amp; FingerItem.y &lt;= 266) Eingabe = Eingabe + "p"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("p");<br />
if (FingerItem.x &gt;= 694 &amp;&amp; FingerItem.x &lt; 774 &amp;&amp; FingerItem.y &gt;=  98 &amp;&amp; FingerItem.y &lt;= 178) Eingabe = Eingabe + "q"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("q");<br />
if (FingerItem.x &gt;= 694 &amp;&amp; FingerItem.x &lt; 774 &amp;&amp; FingerItem.y &gt;=  10 &amp;&amp; FingerItem.y &lt;=  90) Eingabe = Eingabe + "r"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("r");<br />
if (FingerItem.x &gt;= 782 &amp;&amp; FingerItem.x &lt; 862 &amp;&amp; FingerItem.y &gt;= 450 &amp;&amp; FingerItem.y &lt;= 530) Eingabe = Eingabe + "s"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("s");<br />
if (FingerItem.x &gt;= 782 &amp;&amp; FingerItem.x &lt; 862 &amp;&amp; FingerItem.y &gt;= 362 &amp;&amp; FingerItem.y &lt;= 442) Eingabe = Eingabe + "t"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("t");<br />
if (FingerItem.x &gt;= 782 &amp;&amp; FingerItem.x &lt; 862 &amp;&amp; FingerItem.y &gt;= 274 &amp;&amp; FingerItem.y &lt;= 352) Eingabe = Eingabe + "u"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("u");<br />
if (FingerItem.x &gt;= 782 &amp;&amp; FingerItem.x &lt; 862 &amp;&amp; FingerItem.y &gt;= 186 &amp;&amp; FingerItem.y &lt;= 266) Eingabe = Eingabe + "v"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("v");<br />
if (FingerItem.x &gt;= 782 &amp;&amp; FingerItem.x &lt; 862 &amp;&amp; FingerItem.y &gt;=  98 &amp;&amp; FingerItem.y &lt;= 178) Eingabe = Eingabe + "w"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("w");<br />
if (FingerItem.x &gt;= 782 &amp;&amp; FingerItem.x &lt; 862 &amp;&amp; FingerItem.y &gt;=  10 &amp;&amp; FingerItem.y &lt;=  90) Eingabe = Eingabe + "x"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("x");<br />
if (FingerItem.x &gt;= 870 &amp;&amp; FingerItem.x &lt; 950 &amp;&amp; FingerItem.y &gt;= 450 &amp;&amp; FingerItem.y &lt;= 530) Eingabe = Eingabe + "y"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("y");<br />
if (FingerItem.x &gt;= 870 &amp;&amp; FingerItem.x &lt; 950 &amp;&amp; FingerItem.y &gt;= 362 &amp;&amp; FingerItem.y &lt;= 442) Eingabe = Eingabe + "z"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("z");<br />
if (FingerItem.x &gt;= 870 &amp;&amp; FingerItem.x &lt; 950 &amp;&amp; FingerItem.y &gt;= 274 &amp;&amp; FingerItem.y &lt;= 352) Eingabe = Eingabe + "?"; <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("z");<br />
if (FingerItem.x &gt;= 870 &amp;&amp; FingerItem.x &lt; 950 &amp;&amp; FingerItem.y &gt;= 186 &amp;&amp; FingerItem.y &lt;= 266) { <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("Leer");<br />
Eingabe = Eingabe + " ";<br />
Leer_Stelle = Eingabe.length() - 1;<br />
}<br />
if (FingerItem.x &gt;= 870 &amp;&amp; FingerItem.x &lt; 950 &amp;&amp; FingerItem.y &gt;=  98 &amp;&amp; FingerItem.y &lt;= 178) { <a href="//Serial.print" target="_blank" rel="noopener noreferrer nofollow ugc">//Serial.print</a>("&lt;-");<br />
Eingabe_len = Eingabe.length();<br />
if (Eingabe_len &gt;= 1) Eingabe = Eingabe.substring(0, Eingabe_len - 1);<br />
canvas.fillRect(0, 52, 540, 360, 0);<br />
}<br />
if (FingerItem.x &gt;= 870 &amp;&amp; FingerItem.x &lt; 950 &amp;&amp; FingerItem.y &gt;=  10 &amp;&amp; FingerItem.y &lt;=  90) {<br />
Auswahl = 1;<br />
Serial.println("OK"); Serial.println();<br />
}<br />
int Zeilenlaenge = 20;<br />
canvas.setTextSize(48);<br />
canvas.setTextArea(10, 60, 520, 460);<br />
Zeichen = Zeichen + 1;<br />
if (Zeichen &gt; Zeilenlaenge){<br />
Eingabe.setCharAt(Leer_Stelle, '\n');<br />
canvas.fillRect(0, 60, 540, 360, 0);<br />
Zeichen = Eingabe.length() - Leer_Stelle;<br />
}<br />
canvas.print(Eingabe);<br />
canvas.pushCanvas(0,0,UPDATE_MODE_DU4);<br />
}<br />
}<br />
}<br />
}<br />
}<br />
Eingabe.replace('\n', ' ');<br />
Serial.println(Eingabe);<br />
monitorWiFi();<br />
canvas.setTextArea(10, 60, 520, 460);<br />
canvas.drawRect(100, 300, 330, 80, 15); canvas.fillRect(101, 301, 328, 78, 2);<br />
canvas.drawString("processing ...", 110, 320);<br />
canvas.pushCanvas(0,0,UPDATE_MODE_GL16);<br />
Serial.println(openAI_text(Eingabe));<br />
}</p>
]]></description><link>https://community.m5stack.com/topic/4930/m5paper-fast-update-to-display-keyboard-entries</link><guid isPermaLink="true">https://community.m5stack.com/topic/4930/m5paper-fast-update-to-display-keyboard-entries</guid><dc:creator><![CDATA[DiWa]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[Unit cam DIY kit and Esphome]]></title><description><![CDATA[<p dir="auto">Hi Win,</p>
<p dir="auto">My is not working too.</p>
<p dir="auto">Pinout is the same as stated on <a href="https://docs.m5stack.com/en/unit/unit_cam" target="_blank" rel="noopener noreferrer nofollow ugc">https://docs.m5stack.com/en/unit/unit_cam</a></p>
<p dir="auto">But in Homeassistant ESPHome generates these errors:</p>
<p dir="auto">[12:03:45][C][ledc.output:180]: LEDC Output:<br />
[12:03:45][C][ledc.output:181]:   Pin GPIO2<br />
[12:03:45][C][ledc.output:182]:   LEDC Channel: 2<br />
[12:03:45][C][ledc.output:183]:   PWM Frequency: 1000.0 Hz<br />
[12:03:45][C][ledc.output:184]:   Phase angle: 0.0°<br />
[12:03:45][C][ledc.output:185]:   Bit depth: 16<br />
[12:03:45][C][esp32_camera:048]: ESP32 Camera:<br />
[12:03:45][C][esp32_camera:049]:   Name: Timer-CAM1<br />
[12:03:45][C][esp32_camera:050]:   Internal: YES<br />
[12:03:45][C][esp32_camera:052]:   Data Pins: D0:32 D1:35 D2:34 D3:5 D4:39 D5:18 D6:36 D7:19<br />
[12:03:45][C][esp32_camera:053]:   VSYNC Pin: 22<br />
[12:03:45][C][esp32_camera:054]:   HREF Pin: 26<br />
[12:03:45][C][esp32_camera:055]:   Pixel Clock Pin: 21<br />
[12:03:45][C][esp32_camera:056]:   External Clock: Pin:27 Frequency:20000000<br />
[12:03:45][C][esp32_camera:060]:   I2C Pins: SDA:25 SCL:23<br />
[12:03:45][C][esp32_camera:062]:   Reset Pin: 15<br />
[12:03:45][C][esp32_camera:080]:   Resolution: 640x480 (VGA)<br />
[12:03:45][E][esp32_camera:123]:   Setup Failed: ESP_ERR_NOT_FOUND<br />
[12:03:45][E][component:082]:   Component esp32_camera is marked FAILED</p>
]]></description><link>https://community.m5stack.com/topic/4768/unit-cam-diy-kit-and-esphome</link><guid isPermaLink="true">https://community.m5stack.com/topic/4768/unit-cam-diy-kit-and-esphome</guid><dc:creator><![CDATA[SuperObob]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[Strange interaction between clock and temperature sensor setup]]></title><description><![CDATA[<p dir="auto">Hi <a class="mention plugin-mentions-user plugin-mentions-a" href="https://community.m5stack.com/uid/4037">@felmue</a> and thanks for the help. I changed the GPIO and it works now. Strange, as input port 2 worked when the temperature sensor was implemented separately.</p>
<p dir="auto">However I have spend enough time trying to make it work. So I will not conplain now that it is working.</p>
<p dir="auto">Thanks again</p>
]]></description><link>https://community.m5stack.com/topic/4574/stange-interaction-between-clock-and-temprature-sensor-setup</link><guid isPermaLink="true">https://community.m5stack.com/topic/4574/stange-interaction-between-clock-and-temprature-sensor-setup</guid><dc:creator><![CDATA[EricR]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[AtomMotion + PS4 Gamepad]]></title><description><![CDATA[<p dir="auto">My board was defective. I changed the MotionBase, and now it is working perfectly.</p>
]]></description><link>https://community.m5stack.com/topic/4408/atommotion-ps4-gamepad</link><guid isPermaLink="true">https://community.m5stack.com/topic/4408/atommotion-ps4-gamepad</guid><dc:creator><![CDATA[Cakedrinker]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[Do you know Bluetooth Sample Code. a2dp+spp]]></title><description><![CDATA[<p dir="auto">Hey, everyone.</p>
<p dir="auto">I am looking for a sample in arduino-esp32.<br />
Bluetooth has the A2DP and SPP profiles, and I would like to run A2DP and SPP at the same time. Have you seen any sample code anywhere that achieves this?<br />
If anyone knows of any, could you please let me know?<br />
In my specific case, I am running the ESP32-A2DP library and BluetoothSerial at the same time on two M5 ATOM Lite devices, to keep a GIF animation and MP3 background music playing back in sync.</p>
<p dir="auto">Thank you in advance.</p>
]]></description><link>https://community.m5stack.com/topic/4297/do-you-know-bluetooth-sample-code-a2dp-spp</link><guid isPermaLink="true">https://community.m5stack.com/topic/4297/do-you-know-bluetooth-sample-code-a2dp-spp</guid><dc:creator><![CDATA[riraosan]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[Stamp Pico USB Serial]]></title><description><![CDATA[<p dir="auto"><a class="mention plugin-mentions-user plugin-mentions-a" href="https://community.m5stack.com/uid/4037">@felmue</a> said in <a href="/post/15611">Stamp Pico USB Serial</a>:</p>
<blockquote>
<p dir="auto">Or in other words, there is no way to directly connect D- and D+ from the USB-C breakout to the M5PicoStamp.</p>
</blockquote>
<p dir="auto">Hi Felix,</p>
<p dir="auto">thanks alot for that information.<br />
Was hoping this converter could be easy be done by some ICs and restistors maybe but would be too difficult for me yet.<br />
So i think i will get another controller with built in or a cojntroller with onboard USBC connection.<br />
Makes no sense to spend much time and money if the device will be procudes 100 times each with MCU and a serial converter.</p>
<p dir="auto">Your help was great!!!</p>
<p dir="auto">Best regards<br />
Hannes</p>
]]></description><link>https://community.m5stack.com/topic/3772/stamp-pico-usb-serial</link><guid isPermaLink="true">https://community.m5stack.com/topic/3772/stamp-pico-usb-serial</guid><dc:creator><![CDATA[HannesKannes]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[Loss of the program on Arduino]]></title><description><![CDATA[<p dir="auto"><a class="mention plugin-mentions-user plugin-mentions-a" href="https://community.m5stack.com/uid/1">@m5stack</a> How can I get the bin file from the device?</p>
]]></description><link>https://community.m5stack.com/topic/3256/loss-of-the-program-on-arduino</link><guid isPermaLink="true">https://community.m5stack.com/topic/3256/loss-of-the-program-on-arduino</guid><dc:creator><![CDATA[What_is]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[How to play and store audio file from Google translate url?]]></title><description><![CDATA[<p dir="auto">Hi, I'm trying to play the audio on the Atom Smart Speaker Development Kit, for example: play the audio returned from <a href="https://translate.google.com/translate_tts?ie=UTF-8&amp;q=Hello%2C+World%21&amp;tl=en&amp;client=tw-ob&amp;ttsspeed=1" target="_blank" rel="noopener noreferrer nofollow ugc">https://translate.google.com/translate_tts?ie=UTF-8&amp;q=Hello%2C+World!&amp;tl=en&amp;client=tw-ob&amp;ttsspeed=1</a> and be able to store it for playback later?</p>
]]></description><link>https://community.m5stack.com/topic/2930/how-to-play-and-store-audio-file-from-google-translate-url</link><guid isPermaLink="true">https://community.m5stack.com/topic/2930/how-to-play-and-store-audio-file-from-google-translate-url</guid><dc:creator><![CDATA[makadodo]]></dc:creator><pubDate>Invalid Date</pubDate></item></channel></rss>