<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Topics tagged with robotics]]></title><description><![CDATA[A list of topics that have been tagged with robotics]]></description><link>https://community.m5stack.com/tags/robotics</link><generator>RSS for Node</generator><lastBuildDate>Wed, 11 Mar 2026 13:46:12 GMT</lastBuildDate><atom:link href="https://community.m5stack.com/tags/robotics.rss" rel="self" type="application/rss+xml"/><pubDate>Invalid Date</pubDate><ttl>60</ttl><item><title><![CDATA[AI-Enhanced Robotics, GPT-4V and myCobot in Action]]></title><description><![CDATA[<p dir="auto"><em>This article on the mylangrobot project was created by neka-nat, and the author Shirokuma has authorized its editing and reproduction. A big shout out to neka-nat :)<br />
<a href="https://twitter.com/neka_nat" target="_blank" rel="noopener noreferrer nofollow ugc">https://twitter.com/neka_nat</a></em></p>
<p dir="auto"><img src="https://hackster.imgix.net/uploads/attachments/1671536/image_6gzeG2cNR1.png?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /></p>
<h2>Introduction</h2>
<p dir="auto">This project demonstrates a simple pick-and-place operation with a robotic arm, combining GPT-4V and myCobot. The demonstration uses a method named SoM (Set of Mark) to generate robotic movements from natural language. In simpler terms, the machine receives natural-language instructions, locates the target, and then commands the robotic arm to perform the grabbing task. The highlight of this project is the combination of GPT-4V's image understanding and the SoM marking method, enabling the robotic arm to be driven through natural-language interaction with the machine.</p>
<h2>Software</h2>
<h3>SoM</h3>
<p dir="auto">Set of Mark (SoM) is a method designed to enhance the visual understanding capabilities of large language models. After an image is processed with SoM, a series of markers are added to it. These markers can be recognized and processed by language models, aiding in more accurate identification and understanding of objects and content within the image. This enhancement allows language models to conduct more precise analysis and descriptions of elements in images, thereby improving their performance in visual tasks.<br />
<img src="https://hackster.imgix.net/uploads/attachments/1671537/_bwKXYoSNej.blob?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /></p>
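<p dir="auto">As a rough illustration of the idea (not the actual SoM implementation), the sketch below takes a set of binary region masks and assigns each one a numeric marker at its centroid, the kind of label a language model can later refer to by number. The mask shapes and helper name are hypothetical.</p>

```python
import numpy as np

def assign_markers(masks):
    """Assign a numeric marker (1..N) to the centroid of each region mask.

    masks: list of 2-D boolean arrays, one per detected region.
    Returns a list of (marker_id, (row, col)) tuples.
    """
    markers = []
    for i, mask in enumerate(masks, start=1):
        rows, cols = np.nonzero(mask)
        centroid = (int(rows.mean()), int(cols.mean()))
        markers.append((i, centroid))
    return markers

# Two toy 4x4 region masks standing in for real segmentation output
m1 = np.zeros((4, 4), dtype=bool); m1[0:2, 0:2] = True
m2 = np.zeros((4, 4), dtype=bool); m2[2:4, 2:4] = True
print(assign_markers([m1, m2]))  # [(1, (0, 0)), (2, (2, 2))]
```

<p dir="auto">In the real project the markers are drawn onto the image with the supervision annotators shown later in this article.</p>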
<h3>GPT-4V</h3>
<p dir="auto">The commonly discussed GPT is a large language model with which we can have conversational interactions. The new iteration, GPT-4V, is a multimodal model that processes not only textual information but also images. It is capable of image understanding and image description. This model combines the powerful natural language processing abilities of GPT-4 with current image analysis technologies, offering more efficient and accurate integrated capabilities across vision and language.<br />
Below is <a href="https://cookbook.openai.com/examples/gpt_with_vision_for_video_understanding" target="_blank" rel="noopener noreferrer nofollow ugc">an example provided by OpenAI:</a></p>
<pre><code>User:
"These are frames from a video that I want to upload. Generate a compelling description that I can upload along with the video."
</code></pre>
<p dir="auto"><img src="https://hackster.imgix.net/uploads/attachments/1671538/image_ImW1wSoFMn.png?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /></p>
<pre><code>#GPT-4V Image description processing
GPT4:
"🐺 Survival of the Fittest: An Epic Tale in the Snow ❄️ - Witness the intense drama of nature as a pack of wolves face off against mighty bison in a harsh winter landscape. This raw footage captures the essence of the wild where every creature fights for survival. With each frame, experience the tension, the strategy, and the sheer force exerted in this life-or-death struggle. See nature's true colors in this gripping encounter on the snowy plains. 🦬"

Remember to respect wildlife and nature. This video may contain scenes that some viewers might find intense or distressing, but they depict natural animal behaviors important for ecological studies and understanding the reality of life in the wilderness.
</code></pre>
<h3>Openai-whisper</h3>
<p dir="auto"><a href="https://openai.com/research/whisper" target="_blank" rel="noopener noreferrer nofollow ugc">https://openai.com/research/whisper</a><br />
An automatic speech recognition (ASR) system developed by OpenAI. It utilizes advanced deep learning techniques to transcribe, translate, and understand human speech. Whisper is characterized by its ability to support multiple languages and its exceptional performance in recognizing voices with various accents and in noisy environments. Additionally, it can handle different audio qualities, making it suitable for various applications such as transcribing meeting records, automatically generating subtitles, and assisting in translations.<br />
<img src="https://hackster.imgix.net/uploads/attachments/1671539/image_hRNiVtcWKF.png?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /></p>
<h3>pymycobot</h3>
<p dir="auto"><a href="https://github.com/elephantrobotics/pymycobot/" target="_blank" rel="noopener noreferrer nofollow ugc">https://github.com/elephantrobotics/pymycobot/</a><br />
pymycobot is a Python library for the mycobot robot series. MyCobot is a compact, multifunctional collaborative robotic arm suitable for education, research, and lightweight industrial applications. The PyMyCobot library provides a simple programming interface, enabling developers to control and program MyCobot robots for operations such as movement, grabbing, and sensing. This library supports multiple operating systems and development environments, facilitating its integration into various projects, especially in robotics and automation. By using Python, a widely-used programming language, pymycobot makes operating and experimenting with MyCobot robots more accessible and flexible.</p>
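<p dir="auto">As a hedged sketch of how pymycobot is typically used (the serial port, baud rate, and joint-limit values below are assumptions; verify them against your own setup and the myCobot 280 documentation):</p>

```python
from typing import List

# Assumed joint limits for the myCobot 280, in degrees (check the datasheet).
JOINT_LIMITS = [(-165, 165), (-165, 165), (-165, 165),
                (-165, 165), (-165, 165), (-175, 175)]

def angles_within_limits(angles: List[float]) -> bool:
    """Check a 6-joint pose against the assumed joint limits before sending it."""
    return len(angles) == 6 and all(
        lo <= a <= hi for a, (lo, hi) in zip(angles, JOINT_LIMITS)
    )

def send_pose(mc, angles: List[float], speed: int = 50) -> None:
    """Send a validated pose; `mc` is a connected pymycobot MyCobot instance."""
    if not angles_within_limits(angles):
        raise ValueError("pose out of joint range: %r" % (angles,))
    mc.send_angles(angles, speed)

# On real hardware you would connect first, e.g.:
#   from pymycobot.mycobot import MyCobot
#   mc = MyCobot("/dev/ttyUSB0", 115200)   # port and baud rate are assumptions
#   send_pose(mc, [0, 30, -30, 0, 0, 0])
```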
<h2>Hardware</h2>
<h3>myCobot 280 M5</h3>
<p dir="auto">The myCobot 280 M5 is a desktop-level compact six-axis collaborative robot produced by Elephant Robotics. Designed for compactness, it is suitable for education, research, and light industrial applications. The myCobot 280 M5 supports various programming and control methods, compatible with different operating systems and programming languages, including:<br />
●Main and auxiliary control chips: ESP32<br />
●Supports Bluetooth (2.4G/5G) and wireless (2.4G 3D Antenna)<br />
●Multiple input and output ports<br />
●Supports free movement, joint movement, Cartesian movement, trajectory recording, and wireless control<br />
●Compatible operating systems: Windows, Linux, macOS<br />
●Supported programming languages: Python, C++, C#, JavaScript<br />
●Supported programming platforms and tools: RoboFlow, myblockly, Mind+, UiFlow, Arduino, mystudio<br />
●Supported communication protocols: Serial port control protocol, TCP/IP, MODBUS<br />
These features make the myCobot 280 M5 a versatile, user-friendly robot solution suitable for a variety of application scenarios.<br />
<img src="https://hackster.imgix.net/uploads/attachments/1671541/_jrCeeyf5SM.blob?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /></p>
<h3>myCobot Vertical Suction Pump V2.0</h3>
<p dir="auto">Operates on the principle of vacuum adhesion, providing 3.3V IO control, and can be extensively used in the development of various embedded devices.<br />
<img src="https://hackster.imgix.net/uploads/attachments/1671545/image_95m3eTMvvl.png?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /></p>
<h3>Camera</h3>
<p dir="auto">Standard USB and LEGO interfaces. The USB interface can be used with various PC devices, and the LEGO interface can be conveniently fixed. It is applicable to machine vision, image recognition, and other applications.<br />
<img src="https://hackster.imgix.net/uploads/attachments/1671547/image_Vsk67uhSCo.png?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /></p>
<h2>mylangrobot Software Analysis</h2>
<p dir="auto">The specific workflow of the project described at the beginning is as follows:</p>
<ol>
<li>Audio Input: Record audio instructions first.</li>
<li>Audio Processing: Use "openai-whisper" to process the audio and convert it into text.</li>
<li>Language Model Interaction: Use the GPT-4 model to process the converted text instructions and understand the user's commands.</li>
<li>Image Processing: Use GPT-4V and the enhanced image capability of SoM to process images and find the target mentioned in the instructions.</li>
<li>Robotic Arm Control: Control the robotic arm to grab the identified target.</li>
</ol>
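<p dir="auto">The five steps above can be sketched end to end. Everything here is a stub standing in for the real components (Whisper, SoM, GPT-4V, pymycobot); the point is only how the stages hand data to one another.</p>

```python
def transcribe(audio):                      # stand-in for openai-whisper
    return "pick up the chocolate"

def detect_objects(image):                  # stand-in for SoM marking
    return {0: "chocolate", 1: "cup"}

def choose_target(command, labels):         # stand-in for GPT-4V reasoning
    return next(no for no, name in labels.items() if name in command)

def run_pipeline(audio, image, arm_log):
    command = transcribe(audio)                 # steps 1-2: audio to text
    labels = detect_objects(image)              # step 4: mark candidate objects
    target = choose_target(command, labels)     # steps 3-4: pick the object number
    arm_log.append(("move_to_object", target))  # step 5: command the arm
    arm_log.append(("grab",))
    return arm_log

log = run_pipeline(audio=None, image=None, arm_log=[])
print(log)  # [('move_to_object', 0), ('grab',)]
```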
<h3>Audio Processing</h3>
<p dir="auto">This function utilizes speech_recognition to capture audio data from the microphone, enabling the computer to recognize it.<br />
Libraries used:</p>
<pre><code>import io
import os
from enum import Enum
from typing import Protocol

import openai
import speech_recognition as sr
from pydub import AudioSegment
from pydub.playback import play
</code></pre>
<p dir="auto">Define interfaces, capture user input, and provide output to the user.</p>
<pre><code>class Interface(Protocol):
    def input(self, prefix: str = "") -&gt; str:
        return prefix + self._input_impl()

    def _input_impl(self) -&gt; str:
        ...

    def output(self, message: str) -&gt; None:
        ...
</code></pre>
<p dir="auto">Initialize the microphone for audio input and output.</p>
<pre><code>class Audio(Interface):
    def __init__(self):
        self.r = sr.Recognizer()
        self.mic = sr.Microphone()
        # OpenAI API key (the client is used for the Whisper API)
        self.client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
</code></pre>
<p dir="auto">Convert the input audio into text format for output.</p>
<pre><code>    def _input_impl(self) -&gt; str:
        print("Please tell me your command.")
        with self.mic as source:
            self.r.adjust_for_ambient_noise(source)
            audio = self.r.listen(source)

        try:
            return self.r.recognize_whisper(audio, language="japanese")

        except sr.UnknownValueError:
            print("Could not understand audio")
        except sr.RequestError as e:
            print("Could not request results from the Whisper service; {0}".format(e))
</code></pre>
<p dir="auto">The returned value is the transcribed text of the audio, which can then be used for interaction with the GPT-4 model.</p>
<h3>Image Processing and GPT-4 Language Interaction</h3>
<p dir="auto">When transmitting text to the GPT-4 model for interaction, images are sent along, so image processing and interaction are discussed together.<br />
Libraries used for image processing:</p>
<pre><code>import cv2
import numpy as np
import supervision as sv
import torch
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

from .utils import download_sam_model_to_cache
</code></pre>
<p dir="auto">Primarily uses the SamAutomaticMaskGenerator feature to mark and draw markers on detected targets.</p>
<pre><code># Excerpt from the image-processing method.
# Convert the image to RGB format
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Target detection and marker rendering
sam_result = self.mask_generator.generate(image_rgb)
detections = sv.Detections.from_sam(sam_result=sam_result)
height, width, _ = image.shape
image_area = height * width

# Keep only detections whose mask area falls between the two thresholds
min_area_mask = (detections.area / image_area) &gt; self.MIN_AREA_PERCENTAGE
max_area_mask = (detections.area / image_area) &lt; self.MAX_AREA_PERCENTAGE
detections = detections[min_area_mask &amp; max_area_mask]

# Return the annotated image and the detected information
labels = [str(i) for i in range(len(detections))]
annotated_image = mask_annotator.annotate(scene=image_rgb.copy(), detections=detections)
annotated_image = label_annotator.annotate(scene=annotated_image, detections=detections, labels=labels)
return annotated_image, detections

</code></pre>
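<p dir="auto">The area filter in the excerpt above is plain boolean masking over the detections' mask areas. In isolation, with hypothetical threshold values and areas:</p>

```python
import numpy as np

MIN_AREA_PERCENTAGE = 0.002   # hypothetical thresholds, not the project's values
MAX_AREA_PERCENTAGE = 0.30

image_area = 740 * 555                    # pixels in the captured frame
areas = np.array([50, 2000, 150000])      # per-detection mask areas in pixels

fraction = areas / image_area
keep = (fraction > MIN_AREA_PERCENTAGE) & (fraction < MAX_AREA_PERCENTAGE)
print(areas[keep])  # tiny specks and huge background regions are dropped
```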
<p dir="auto"><img src="https://hackster.imgix.net/uploads/attachments/1671548/image_aiT6dtqGse.png?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /><br />
<img src="https://hackster.imgix.net/uploads/attachments/1671549/image_j8UHTvji8R.png?auto=compress%2Cformat&amp;w=740&amp;h=555&amp;fit=max" alt="alt text" class=" img-fluid img-markdown" /><br />
The images above show the resulting effect.<br />
Note: the function below requires a GPT-4 API key.<br />
The annotated image is passed to the GPT-4 model after some preprocessing. Through GPT-4V, the image can be processed to return information about its content and the corresponding object markers.</p>
<pre><code># Note: this function also relies on `requests`, a SoM `metaprompt` system
# prompt, a `headers` dict carrying the API key, and an `encode_image_from_cv2`
# helper, all defined elsewhere in the project.
def prepare_inputs(message: str, image: np.ndarray) -&gt; dict:
    base64_image = encode_image_from_cv2(image)

    payload = {
        "model": "gpt-4-vision-preview",
        "messages": [
            {"role": "system", "content": [metaprompt]},
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": message,
                    },
                    {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"}},
                ],
            },
        ],
        "max_tokens": 800,
    }

    return payload


def request_gpt4v(message: str, image: np.ndarray) -&gt; str:
    payload = prepare_inputs(message, image)
    response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
    res = response.json()["choices"][0]["message"]["content"]
    return res
</code></pre>
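<p dir="auto">The model's reply is free-form text, so the project still has to pull the chosen marker number out of it before commanding the arm. A minimal sketch of that step (the reply format and the regex are assumptions, not the project's actual parser):</p>

```python
import re

def extract_object_no(reply: str) -> int:
    """Pull the first marker number out of a GPT-4V reply such as
    'The chocolate is object 3.'; raises ValueError if none is found."""
    match = re.search(r"\d+", reply)
    if match is None:
        raise ValueError("no object number in reply: %r" % reply)
    return int(match.group(0))

print(extract_object_no("The chocolate is object 3."))  # 3
```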
<h3>Robotic Arm Control and Overall Integration</h3>
<p dir="auto">After image processing and GPT-4V model processing, the interpreted instructions yield target position information. This position information is passed to the robotic arm control system, which moves to the corresponding location to perform the grabbing action.<br />
Key methods involved:<br />
Move to the target object.</p>
<pre><code>    def move_to_object(self, object_no: int, speed: Optional[float] = None) -&gt; None:
        object_no = self._check_and_correct_object_no(object_no)
        print("[MyCobotController] Move to Object No. {}".format(object_no))
        detection = (
            np.array([-self._detections[object_no][0], -self._detections[object_no][1]])
            + self.capture_coord.pos[:2]
        )
        print("[MyCobotController] Object pos:", detection[0], detection[1])
        self.move_to_xy(detection[0], detection[1], speed)
</code></pre>
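<p dir="auto">The coordinate arithmetic in move_to_object, negating the detection's camera-frame offset and adding the arm's capture position, can be checked in isolation. The numbers below are hypothetical:</p>

```python
import numpy as np

capture_xy = np.array([150.0, 0.0])         # arm XY when the photo was taken (mm)
detection_offset = np.array([20.0, -15.0])  # target offset in the camera frame (mm)

# Same formula as move_to_object: flip the camera-frame offset into the
# arm's frame and add the capture position.
target_xy = -detection_offset + capture_xy
print(target_xy)  # x = 130 mm, y = 15 mm
```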
<pre><code># grab action
    def grab(self, speed: Optional[float] = None) -&gt; None:
        print("[MyCobotController] Grab Object")
        current_pos = self.current_coords().pos
        self.move_to_z(self.object_height + self.end_effector_height, speed)
        self._mycobot.set_basic_output(self._suction_pin, 0)
        time.sleep(2)
        self.move_to_z(current_pos[2], speed)

# drop action
    def move_to_place(self, place_name: str, speed: Optional[float] = None) -&gt; None:
        print("[MyCobotController] Move to Place {}".format(place_name))
        self._current_position = self.positions[place_name]
        self._mycobot.sync_send_angles(
            np.array(self._current_position) + self.calc_gravity_compensation(self._current_position),
            speed or self._default_speed,
            self._command_timeout,
        )
        print("Current coords: {}".format(self.current_coords()))
</code></pre>
<p dir="auto">After each function is implemented, coordinate the entire process, streamline the workflow logic, and complete the task.<br />
The specific code can be viewed in the operator.py file.</p>
<h2>Example</h2>
<p dir="auto">Below is an example test to observe the project's outcome. The content involves a voice input saying "pick up the chocolate," and the robotic arm executes the task.<br />
<a href="https://youtu.be/Eda1m7DnIhQ" target="_blank" rel="noopener noreferrer nofollow ugc">https://youtu.be/Eda1m7DnIhQ</a></p>
<h2>Summary</h2>
<p dir="auto">This project demonstrates how to leverage advanced artificial intelligence and robotics technologies to accomplish complex automation tasks. By integrating voice recognition, natural language processing, image analysis, and precise robotic arm control, the project has successfully created a robotic system capable of understanding and executing spoken instructions. This not only enhances the naturalness and efficiency of robot-human interaction but also opens up new possibilities for robotic technology in various practical applications, such as automated manufacturing, logistics, assistive robots, and more.<br />
Finally, thanks again to Shirokuma for sharing this case with us. If you have better examples, feel free to contact us!</p>
]]></description><link>https://community.m5stack.com/topic/5970/ai-enhanced-robotics-gpt-4v-and-mycobot-in-action</link><guid isPermaLink="true">https://community.m5stack.com/topic/5970/ai-enhanced-robotics-gpt-4v-and-mycobot-in-action</guid><dc:creator><![CDATA[ElephantRobotics]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[Harnessing the Power of ROS and MoveIt in Robotic Arm Manipulation]]></title><description><![CDATA[<h2>Introduction</h2>
<p dir="auto">This article documents my experience using the myCobot 280 M5Stack in ROS. Why use ROS? Because when it comes to robotics, it is hard to avoid the ROS operating system.<br />
Since this is my first time using the system, I will start with an introduction to ROS, the configuration of the environment, and the use of the myCobot 280 in ROS.</p>
<h2>ROS</h2>
<p dir="auto">ROS (Robot Operating System) is a flexible framework for writing robot software. It is an open-source project that provides researchers and developers with a set of tools and libraries to help them create various robot applications. ROS is mainly used for the research and development of robot systems such as autonomous vehicles, service robots, industrial robots, and drones. It makes robot development more efficient and enhances the functionality and flexibility of robots. The two most notable modules in ROS are navigation and robotic arm control (MoveIt). MoveIt allows users to quickly build robotic arm models and implement robotic arm control, including modeling, kinematic solving, motion planning, obstacle avoidance, and more. Given such a powerful system, I want to use it to simulate some functions of the robotic arm, such as obstacle avoidance and path planning.<br />
<img src="https://community.robotshop.com/forum/uploads/default/original/3X/e/9/e97ac6f06d51ef35cc4eb851b239108eed920b47.png" alt="alt text" class=" img-fluid img-markdown" /></p>
<h2>Environment configuration</h2>
<p dir="auto">ROS primarily supports operating systems like Ubuntu (Linux) and Mac OS (with partial support). The most common usage is on the Ubuntu system. My computer is installed with Ubuntu 18.04 version. The corresponding ROS version for Ubuntu 18 is Melodic. Different versions of Ubuntu correspond to different ROS versions.</p>
<pre><code>ubuntu16.04-kinetic
ubuntu18.04-melodic
ubuntu20.04-noetic
</code></pre>
<p dir="auto">Since my computer already has the system and the ROS environment installed, I won't record each installation step here; the main focus will be on operations within ROS. For installing Ubuntu and setting up the ROS environment, you can search on Google; the ROS official website also has detailed installation tutorials.<br />
The scenario today involves controlling a robotic arm with MoveIt, so the demands on computer hardware are modest. Note that if you use features like SLAM radar navigation, you may need a higher-spec computer to handle the computation. To check whether ROS has been installed successfully, open a terminal and type "roscore". If the output matches the image below, the installation was successful.<br />
<img src="https://community.robotshop.com/forum/uploads/default/original/3X/8/f/8ff4cf94f5fd258d418565c044be3c87cb07c988.png" alt="alt text" class=" img-fluid img-markdown" /><br />
Let’s use the ROS!</p>
<h2>Project</h2>
<p dir="auto">To create a robotic arm in ROS and use some path planning features, you need to perform the following steps:</p>
<ul>
<li>Create a URDF file, which is the model of the robotic arm.</li>
<li>Load the URDF file into the ROS server.</li>
<li>Use RViz for visualization.</li>
<li>Create a MoveIt configuration package.</li>
<li>Use MoveIt for motion planning.</li>
</ul>
<h3>Creating a URDF file</h3>
<p dir="auto">First, we need to create a URDF (Unified Robot Description Format) file for the robotic arm. This file is crucial, as it describes the properties of the robotic arm, such as joints, links, and sensors, in the form of a 3D model. Since the product is the myCobot 280 M5Stack robotic arm, Elephant Robotics has provided a URDF file for the myCobot 280 on GitHub, which describes the physical properties of the arm.<br />
<strong>How to generate a URDF file:</strong><br />
You need to use CAD software to create the model, ensuring that the 3D model you create matches the actual size of the robotic arm. Because the exported model needs to be in a format that ROS can accept, it is usually a DAE (Collada) file.<br />
<img src="https://community.robotshop.com/forum/uploads/default/optimized/3X/b/b/bb1e983714caf20cb9f849f5cb42a225cc70b966_2_132x230.png" alt="alt text" class=" img-fluid img-markdown" /><br />
<img src="https://community.robotshop.com/forum/uploads/default/original/3X/6/8/68d5ea40ef82c5aa920c0970dd514484d1e5a0c6.png" alt="alt text" class=" img-fluid img-markdown" /></p>
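<p dir="auto">To get a feel for what a URDF actually contains, here is a toy one-joint arm parsed with Python's standard library. This hand-written model is only an illustration, not the myCobot 280 URDF provided by Elephant Robotics:</p>

```python
import xml.etree.ElementTree as ET

URDF = """
<robot name="toy_arm">
  <link name="base_link"/>
  <link name="link1"/>
  <joint name="joint1" type="revolute">
    <parent link="base_link"/>
    <child link="link1"/>
    <limit lower="-2.88" upper="2.88" effort="1.0" velocity="1.0"/>
  </joint>
</robot>
"""

robot = ET.fromstring(URDF)
links = [link.get("name") for link in robot.findall("link")]
joints = [(j.get("name"), j.get("type")) for j in robot.findall("joint")]
print(links)   # ['base_link', 'link1']
print(joints)  # [('joint1', 'revolute')]
```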
<h3>Load URDF to ROS server</h3>
<p dir="auto"><strong>Create a ROS package</strong><br />
To create a package to store the URDF file, execute the following code in the terminal:</p>
<pre><code>catkin_create_pkg mycobot_description rospy std_msgs sensor_msgs
</code></pre>
<p dir="auto">Here, ‘rospy’, ‘std_msgs’, and ‘sensor_msgs’ are dependency packages in ROS. Add the generated URDF file to the mycobot_description/urdf directory.</p>
<p dir="auto"><strong>Create a launch file</strong><br />
In the mycobot_description/launch directory, create a file named load_urdf.launch and add the following content to it.</p>
<pre><code>&lt;launch&gt;
  &lt;param name="robot_description" command="$(find xacro)/xacro.py $(find mycobot_description)/urdf/mycobot.urdf.xacro" /&gt;
&lt;/launch&gt;
</code></pre>
<p dir="auto"><strong>Launch the ROS file</strong><br />
Open the terminal and enter the following command:</p>
<pre><code>roslaunch mycobot_description load_urdf.launch
</code></pre>
<p dir="auto">After running, we can see the model we created in RViz.<br />
<img src="https://community.robotshop.com/forum/uploads/default/original/3X/1/4/14382b10b51a7b9fc990a15de195dad00d0fb271.png" alt="alt text" class=" img-fluid img-markdown" /></p>
<h2>Configure MoveIt</h2>
<p dir="auto">Start MoveIt Setup Assistant:</p>
<p dir="auto">In a terminal, run the following command to start MoveIt Setup Assistant:</p>
<pre><code>roslaunch moveit_setup_assistant setup_assistant.launch
</code></pre>
<p dir="auto"><img src="https://community.robotshop.com/forum/uploads/default/original/3X/3/2/324920697c3a5020b017261cc3c11257f408f015.png" alt="alt text" class=" img-fluid img-markdown" /><br />
Load the URDF in MoveIt: in the main interface of MoveIt Setup Assistant, click the "Create New MoveIt Configuration Package" button. In the pop-up dialogue box, select the URDF file (loaded in the ROS parameter robot_description) and click the "Load Files" button. If the screen shown below appears, the file has been loaded successfully.<br />
<img src="https://community.robotshop.com/forum/uploads/default/original/3X/a/d/ad048036c1f67ce310cf95cdbae162682e7bb202.png" alt="alt text" class=" img-fluid img-markdown" /></p>
<h2>Configure Robot</h2>
<p dir="auto">Configure your robot according to your needs. Here are a few configurations briefly introduced:</p>
<ul>
<li>Self-Collisions: MoveIt will automatically calculate the self-collision matrix of your robot. You can<br />
click the “Regenerate Default Collision Matrix” button to generate a default self-collision matrix.</li>
<li>Virtual Joints: You can define virtual joints for your robot. Virtual joints are often used to connect your robot to the world.</li>
<li>Planning Groups: You can define the planning groups of your robot. Planning groups are a set of joints and links that need to move together. For example, you can create a planning group that includes all arm joints.</li>
<li>Robot Poses: You can define preset poses for your robot. These preset poses can be reused in planning.</li>
</ul>
<p dir="auto">After the configuration is completed, a .launch file will be generated; running it lets us use the robotic arm in MoveIt.<br />
<img src="https://community.robotshop.com/forum/uploads/default/original/3X/8/f/8f484e96903b260c9056f27f3d17f3294a721b22.png" alt="alt text" class=" img-fluid img-markdown" /><br />
Let’s see how the moveit function performs path planning. By dragging the coordinates of the end of the robotic arm, we can make changes to the movement of the robotic arm, and also randomly generate paths.<br />
<img src="https://community.robotshop.com/forum/uploads/default/original/3X/a/2/a2f8cb4d6678f2824942cc3b7a557c23cea30434.gif" alt="alt text" class=" img-fluid img-markdown" /><br />
<strong>Object obstacle avoidance</strong></p>
<p dir="auto">We add an obstacle, a cube, in MoveIt for the robotic arm to avoid, so the arm navigates around it as it follows its trajectory.<br />
<img src="https://community.robotshop.com/forum/uploads/default/original/3X/a/5/a56b9dba256588bb104e91fca64e5e941bb6df94.gif" alt="alt text" class=" img-fluid img-markdown" /><br />
In addition to this, MoveIt has many features. For instance, motion planning, which can perform path planning for multi-joint robots, avoid collisions, and also use various motion planning algorithms like RRT, EST, LBKPIECE, etc. It also includes collision detection, which can carry out complex 3D collision detection, including self-collision and environmental collision (obstacles). Furthermore, it offers simulation capabilities, and can be integrated with the Gazebo robot simulator for realistic physical simulation, among other features.</p>
<h2>Summary</h2>
<p dir="auto">MoveIt provides a comprehensive set of tools for research into the motion planning and control of robotic arms, eliminating the need to build complex motion planning systems from scratch. This allows developers to focus more of their energy on implementing specific application functions, rather than on solving underlying motion planning and control problems. Overall, ROS and MoveIt provide strong support for the development of robotic technology. They not only reduce development difficulties and improve development efficiency, but also promote innovation and application in robotic technology. Whether you are a researcher in robotic technology or a developer in practical applications, ROS and MoveIt will be indispensable tools for you.</p>
<p dir="auto">In the future, I will continue to explore more robotic arm projects and share my findings and achievements from time to time. If you find any errors in this article, feel free to communicate with me. If you like this article, your likes and comments will be the greatest support for me!</p>
]]></description><link>https://community.m5stack.com/topic/5650/harnessing-the-power-of-ros-and-moveit-in-robotic-arm-manipulation</link><guid isPermaLink="true">https://community.m5stack.com/topic/5650/harnessing-the-power-of-ros-and-moveit-in-robotic-arm-manipulation</guid><dc:creator><![CDATA[Yuly12]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[Smart Applications of Holography and Robotic Arms myCobot 320 M5Stack-Basic]]></title><description><![CDATA[<p dir="auto"><a class="mention plugin-mentions-user plugin-mentions-a" href="https://community.m5stack.com/uid/21791">@kkkkkk</a><br />
Hi, you can click the <a href="https://www.dseelab.com/home" target="_blank" rel="noopener noreferrer nofollow ugc">link</a> to know more about it</p>
]]></description><link>https://community.m5stack.com/topic/5096/smart-applications-of-holography-and-robotic-arms-mycobot-320-m5stack-basic</link><guid isPermaLink="true">https://community.m5stack.com/topic/5096/smart-applications-of-holography-and-robotic-arms-mycobot-320-m5stack-basic</guid><dc:creator><![CDATA[ElephantRobotics]]></dc:creator><pubDate>Invalid Date</pubDate></item><item><title><![CDATA[The Ultimate Robotics Comparison: A Deep Dive into the Upgraded Robot AI Kit 2023]]></title><description><![CDATA[<p dir="auto">You can use the robotics arm with AI Kit, only download the project.</p>
]]></description><link>https://community.m5stack.com/topic/5060/the-ultimate-robotics-comparison-a-deep-dive-into-the-upgraded-robot-ai-kit-2023</link><guid isPermaLink="true">https://community.m5stack.com/topic/5060/the-ultimate-robotics-comparison-a-deep-dive-into-the-upgraded-robot-ai-kit-2023</guid><dc:creator><![CDATA[ElephantRobotics]]></dc:creator><pubDate>Invalid Date</pubDate></item></channel></rss>