WOW! The project looks interesting! Do you have any ideas for using the end-effector to complete some motion?

ElephantRobotics
@ElephantRobotics
Enjoy robots world
Posts made by ElephantRobotics
-
RE: Realizing Voice Control for Robotic Arm Movement -M5Stack-base
-
RE: Smart Applications of Holography and Robotic Arms myCobot 320 M5Stack-Basic
@holofloh
Thank you for your comment!
We have also looked up relevant information and realized that this is not holographic technology, which was our mistake. The technology we used in the project is auto-stereoscopic imaging technology, based on the principle of Persistence of Vision (POV), which does not achieve as good an effect as holography.
Thank you once again for your message! We will try to use real holographic technology in combination with robotic arms in the future!
-
Smart Applications of Holography and Robotic Arms myCobot 320 M5Stack-Basic
Introduction
Do you think this display is innovative and magical? It is produced by a technology called holographic projection. Holographic technology has become part of our daily lives, with applications across multiple fields. In the entertainment industry, it is used in movie theaters, game arcades, and theme parks, where holographic projection gives viewers more realistic visual effects and a richer entertainment experience. In the medical field, it is widely used in diagnosis and surgery: by presenting high-resolution 3D images, doctors can observe a condition more accurately, improving the effectiveness of diagnosis and surgery. In education, it is used to create teaching materials and science exhibitions, helping students better understand and master knowledge. It is also applied in engineering and manufacturing, safety monitoring, virtual reality, and other fields, bringing more convenience and innovation to our lives. It is foreseeable that, as the technology develops and its application scenarios expand, holographic technology will play an even more important role in our future lives.
(Images from the internet)
This article describes how to use the myCobot 320 M5Stack 2022 and the DSee-65X holographic projection device to achieve a naked-eye 3D display.
This project is jointly developed by Elephant Robotics and DSeeLab Hologram.
DSee-65X holographic equipment:
Let's take a brief look at how the holographic image is generated. The holographic screen is a display device that uses the principle of persistence of vision (POV, the after-image of moving objects): by spinning a bar of ultra-high-density LEDs, it produces a 3D, air-suspended, holographic stereo display effect that breaks the limitations and monotony of traditional flat displays. It also supports real-time synchronization and interactive development, leading a new trend in the commercial holographic display industry. The DSee-65X is a product of DSeeLab Hologram, a company that specializes in holographic technology.
DSee-65X: high resolution, high brightness, supports various content formats, WiFi connection, APP operation, cloud remote cluster control, unlimited splicing for large screen display, 30,000 hours of continuous operation.
Here is a video introduction of DSee-65X.
https://youtu.be/UDXlNgjwQ8c
myCobot 320 M5Stack 2022
myCobot 320 M5Stack is an upgraded version of the myCobot 280, mainly suitable for makers and researchers, and can be customized through secondary development according to user needs. It offers three major advantages: usability, safety, and economy, in a sophisticated all-in-one design. The myCobot 320 weighs 3 kg, has a payload of 1 kg and a working radius of 350 mm, and is relatively compact but powerful. It is easy to operate, can collaborate with humans, and works safely. The myCobot 320 2022 is equipped with a variety of interfaces and can quickly adapt to various usage scenarios.
Here is a video presentation of the myCobot 320 M5Stack 2022
https://youtu.be/B14BS6I-uS4
With the introduction of the two devices complete, the next step is to combine the holographic device with the robotic arm so that they work together. The operation of this project is very simple and can be divided into two steps:
- Install the DSee-65X on the end of the myCobot 320.
- Control the myCobot 320 to perform a beautiful trajectory to display the holographic image.
Project
Installation
DSee-65X and myCobot320 M5Stack 2022 are products from two different companies. When we received them, we found that we couldn't directly install the holographic device on the end of myCobot320. Therefore, we needed to modify the holographic device.
This is the structure at the end of the myCobot 320.
This is the DSee-65X.
According to the information provided, we added an adapter plate as a bridge between the two.
The maximum payload of the myCobot 320 is 1 kg, so this modification is completely feasible.
Controlling the Robotic Arm
Our goal is to design a trajectory for the myCobot 320 robotic arm that ensures an unobstructed view of the hologram display.
The myCobot 320 has a rich set of interfaces and supports Python, C++, C#, JavaScript, Arduino, ROS, and more. Next, we will program it using a very easy-to-learn method: myBlockly, a graphical programming tool that lets you write code by dragging and dropping blocks.
The picture shows the block-based code for the myCobot 320's trajectory.
myBlockly's underlying code is written in Python, so we can also directly use Python code to control the robotic arm. The following is an example of Python code:
import time
from pymycobot.mycobot import MyCobot

mc = MyCobot('/dev/ttyUSB0')
mc.set_speed(60)

# move to a home position
mc.send_angles([0, -90, 90, 0, 0, 0], 80)
time.sleep(1)

# move to a new position
mc.send_angles([0, -90, 90, 0, 0, 30], 80)
time.sleep(1)

# move to another position
mc.send_angles([0, -90, 90, 0, 30, 30], 80)
time.sleep(1)

# move to a final position
mc.send_angles([0, -90, 90, 0, 30, 0], 80)
time.sleep(1)

mc.release_all_servos()
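If the hologram should stay in view continuously, the same trajectory can simply be repeated in a loop. Here is a minimal sketch, reusing the mc object from the code above (run it in place of the one-shot sequence, before releasing the servos):

display_path = [
    [0, -90, 90, 0, 0, 0],
    [0, -90, 90, 0, 0, 30],
    [0, -90, 90, 0, 30, 30],
    [0, -90, 90, 0, 30, 0],
]
# cycle through the waypoints until the program is interrupted
while True:
    for angles in display_path:
        mc.send_angles(angles, 80)
        time.sleep(1)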
Let's briefly explain how to use the DSee-65X.
DSee-65X has its own dedicated LAN. By connecting your computer to the same LAN, you can launch the software to make the holographic device work.
Summary
The whole process may look like a mere demonstration of a holographic imaging device, with the robotic arm serving as a stand. However, we can imagine more possibilities: using holographic projection technology to project 3D models or images into space, then capturing users' movements or gestures with sensors or cameras to control the robotic arm. For example, in the manufacturing or logistics industries, combining robotic arms with holographic technology could enable more efficient production and logistics operations. In the medical field, robotic arms combined with holographic technology could enable more precise surgery and treatment. In short, combining robotic arms and holographic technology can bring more intelligent and precise control and operation methods to various application scenarios, improving production efficiency and work quality.
These are all areas that require creative minds like yours to put in effort and develop! Please feel free to leave your ideas in the comments below and let's discuss together how to create more interesting projects.
-
-
RE: Building a Smart Navigation System using myCobot M5Stack-Base and myAGV
This project was developed by one of our users. As you said, the whole project has not been automated yet; it is currently still in the development stage. The automation functionality may be completed in the future, and we will continue to follow up.
-
Building a Smart Navigation System using myCobot M5Stack-Base and myAGV
Introduction
As a developer, I am currently involved in an interesting project to combine a SLAM (Simultaneous Localization and Mapping) car, myAGV, with a small six-axis robotic arm, myCobot 280 M5Stack, for research on logistics automation in education and scientific fields.
myAGV is a small car that can perform mapping and navigation and uses a Raspberry Pi 4B as its controller. It can localize and move both indoors and outdoors. The myCobot 280 is a small collaborative robotic arm with six degrees of freedom that can accomplish various tasks in a limited space.
My project goal is to integrate these two devices to achieve automated logistics transportation and placement. We plan to use open-source software and existing algorithms to achieve autonomous navigation, localization, mapping, object grasping, and placement functions. Through documenting the process in this article, we aim to share our journey in developing this project.
The equipment that I am using includes:
myAGV, a SLAM car that is capable of mapping and navigation.
myCobot 280 M5Stack, a six-axis collaborative robotic arm with a complete API that can be controlled via Python.
An adaptive gripper that can be mounted on the myCobot 280 as an end effector and is capable of grasping objects.
Development environment:
Ubuntu 18.04, Python 3.0+, ROS1.
Note: myAGV is controlled by Raspberry Pi 4B, and all environment configurations are based on the configurations provided on the Raspberry Pi.
Project
The picture below shows the general flow of this project.
I split the overall function into small parts, implemented each part independently, and finally integrated them together.
myAGV
First, I worked on myAGV's functions: mapping and automated navigation. I implemented these based on the information provided in the official Gitbook.
I am using the gmapping algorithm to perform mapping. Gmapping, also known as grid-based mapping, is a well-established algorithm for generating 2D maps of indoor environments. It works by building a grid map of the environment using laser range finder data, which can be obtained from the sensors mounted on myAGV.
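As a small illustration of what gmapping produces, here is a sketch of a ROS 1 node that listens to the occupancy grid published on the /map topic (topic and message names follow ROS defaults; myAGV's own launch files may differ):

import rospy
from nav_msgs.msg import OccupancyGrid

def on_map(msg):
    # each grid cell is -1 (unknown), 0 (free), or 100 (occupied)
    info = msg.info
    print('map %dx%d cells at %.3f m/cell' % (info.width, info.height, info.resolution))

rospy.init_node('map_listener')
rospy.Subscriber('/map', OccupancyGrid, on_map)
rospy.spin()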
It's worth noting that I have tried myAGV in various scenarios, and the mapping performance is good when the environment is relatively clean. However, when the surrounding area is complex, the mapping results may not be as good. I will try to improve it by modifying the hardware or software in the future.
The picture below shows myAGV performing automatic navigation.
During automatic navigation, myAGV still experiences deviations. Implementing the navigation functionality is quite complex because its preconditions are strict: after enabling navigation, you must adjust myAGV's actual position and rotate it in place to check that the estimated pose is correct. There are still many areas for improvement, such as automatically locating the car on the map once navigation is enabled.
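For reference, when the standard ROS 1 navigation stack (move_base) is running, a navigation goal can also be sent programmatically rather than through RViz. A minimal sketch, with placeholder goal coordinates:

import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0     # placeholder target, metres in the map frame
goal.target_pose.pose.orientation.w = 1.0  # identity orientation
client.send_goal(goal)
client.wait_for_result()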
myCobot 280
After handling the myAGV, the next step is to control the myCobot movement.
Here, I use Python to control myCobot 280. Python is an easy-to-use programming language, and myCobot's Python API is also quite comprehensive. Below, I will briefly introduce several methods in pymycobot.
- time.sleep(seconds): pause for a few seconds (the robotic arm needs a certain amount of time to complete its movement).
- send_angles(angle_list, speed): send the target angle of each joint and the movement speed to the robotic arm.
- set_gripper_value(value, speed): control the opening and closing of the gripper; 0 is closed, 100 is open, adjustable from 0 to 100.
I wrote a simple program to grab objects; see the GIF demo. A sketch of such a routine follows.
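As a rough sketch of what such a grab routine can look like with the methods above (the joint angles and gripper values are illustrative placeholders, not the ones from the demo):

import time
from pymycobot.mycobot import MyCobot

mc = MyCobot('/dev/ttyUSB0', 115200)

mc.set_gripper_value(100, 50)               # open the gripper
mc.send_angles([0, -30, -60, 0, 0, 0], 50)  # move above the object (placeholder pose)
time.sleep(3)

mc.set_gripper_value(20, 50)                # close the gripper to grasp
time.sleep(2)

mc.send_angles([0, 0, 0, 0, 0, 0], 50)      # lift back to a neutral pose
time.sleep(3)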
Establishing communication
After dealing with these small functions, the next step is to establish communication between myCobot and myAGV.
- The controller of myAGV is a Raspberry Pi, a micro-computer (running Ubuntu 18.04) that can be programmed directly.
- The myCobot 280 M5Stack needs to be controlled by commands sent from a computer.
Based on the above conditions, there are two ways to establish communication between them:
- Serial communication: directly connect them with a Type-C-to-USB data cable (the simplest and most direct method).
- Wireless connection: myCobot supports WiFi control, and commands can be sent to the corresponding IP address (more complicated, and the communication is less stable).
Here, I choose to use serial communication and directly connect them with a data cable.
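With the cable plugged into the Raspberry Pi, verifying the link takes only a few lines. A minimal sketch (the device path is an assumption; it may be /dev/ttyACM0 on some systems):

from pymycobot.mycobot import MyCobot

# the myCobot appears as a USB serial device on the Raspberry Pi
mc = MyCobot('/dev/ttyUSB0', 115200)
print(mc.get_angles())  # prints the current joint angles if the link works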
Here I recommend a software called VNC Viewer, which is a cross-platform remote control software. I use VNC to remotely control myAGV, which is very convenient because I don't have to carry a monitor around.
If you have any better remote control software, you can leave a comment below to recommend it to me.
Let's see how the overall operation works.
Summary
In this project, only simple SLAM-related algorithms are used, and the navigation algorithm needs further optimization to achieve more accurate results. As for myCobot, it is a relatively mature robotic arm with a convenient interface, and the end effectors provided by Elephant Robotics meet the requirements, so there was no need to build a custom gripper for the project.
There are still many aspects of the project that need to be optimized, and I will continue to develop it in the future. Thank you for watching, and if you have any interest or questions, please feel free to leave a comment below.
-
-
RE: The Ultimate Robotics Comparison: A Deep Dive into the Upgraded Robot AI Kit 2023
You can use the robotic arm with the AI Kit; you only need to download the project.
-
The Ultimate Robotics Comparison: A Deep Dive into the Upgraded Robot AI Kit 2023
Introduction
The AI Kit (Artificial Intelligence Kit) is mainly designed to provide a set of kits suitable for beginners and professionals to learn and apply artificial intelligence. It includes robotic arms (myCobot 280 M5Stack, mechArm 270 M5Stack, myPalletizer 260 M5Stack) and related software, hardware, sensors, and other devices, as well as supporting tutorials and development tools. The AI Kit aims to help users better understand and apply artificial intelligence technology and provide them with opportunities for practice and innovation. The latest upgrade further enhances the functionality and performance of the AI Kit 2023, making it more suitable for various scenarios and needs, including education, scientific research, manufacturing, and more.
Product Description
AI Kit is an entry-level artificial intelligence kit that combines visual, positioning, grabbing, and automatic sorting modules in one. The kit is based on the Python programming language and enables control of robotic arms through software development. With the ROS robot operating system in the Ubuntu system, a real 1:1 scene simulation model is established, allowing for quick learning of fundamental artificial intelligence knowledge, inspiring innovative thinking, and promoting open-source creative culture. This open-source kit has transparent designs and algorithms that can be easily used for specialized training platforms, robotics education, robotics laboratories, or individual learning and use.
Why upgrade AI Kit 2023?
The reasons for upgrading to the AI Kit 2023 are multifaceted. First, we collected extensive feedback from our users and incorporated their suggestions into the new release. The upgraded version enhances the functionality and performance of the AI Kit, making it more suitable for various scenarios and industries such as education, research, and manufacturing. The following are some of the specific reasons:
● Even with detailed installation instructions, setting up the AI Kit's environment can still be challenging for various reasons, causing inconvenience to users.
● The first generation of the AI Kit only has two recognition algorithms: color recognition and feature point recognition. We aim to provide a more diverse range of recognition algorithms.
● Due to the abundance of parts and complex device setups, the installation process of the AI Kit can be time-consuming and require a lot of adjustment.
Based on the above 3 points, we have begun optimizing and upgrading the AI Kit.
What aspects have been upgraded in AI Kit 2023?
Let’s take a look at a rough comparison table of the upgrades.
The additions to the functionality can be divided into two main areas of improvement.
One is the software upgrades, and the other is the hardware upgrades.
Let’s start by looking at the hardware upgrades.
Hardware upgrades
The AI Kit 2023 has been upgraded in several aspects, as shown in the comparison table. The updated AI Kit has a clean and minimalist style with multiple hardware upgrades, including:
- Acrylic board: upgraded in hardness and material
- Camera: upgraded to a higher resolution, with an added light
- External material of the camera: upgraded from plastic to metal
- Suction pump: adjusted to a suitable power (neither too strong nor too weak) with an upgraded interface (old models required an additional power supply interface)
- Arm base: strengthened fixing of the arm to make its movement more stable
- Bucket/parts box: smaller in size for easier carrying and installation
Here is a video of unboxing the AI Kit 2023.
The overall impression is still very good. Let’s take a look at the software upgrades that have been made.
Software upgrades
● Optimization of environment setup: the previous version of the AI Kit had to run in a ROS development environment. Based on user feedback that installing Linux, ROS, and the other required environments was difficult, we have ported the program to run directly in a plain Python environment, which is much easier to set up than a full Python-plus-ROS environment.
● Upgrade of program UI: The previous version had a one-click start UI interface, which did not provide users with much information (similar to simple operations such as booting up). In the AI Kit 2023 program, a brand new UI interface has been designed, which can give users a refreshing feeling in terms of both aesthetics and functionality. It not only provides users with convenient operation, but also helps users to have a clearer understanding of the operation of the entire program.
From the figure, we can see features for connecting the robotic arm, opening the camera, selecting recognition algorithms, and automatic startup. These designs help users better understand the AI Kit.
● Breakthroughs in recognition algorithms: in addition to the original color recognition and feature point recognition algorithms, the AI Kit has been expanded to five recognition algorithms: color recognition, shape recognition, ArUco code recognition, feature point recognition, and YOLOv5 recognition. The first four are based on the OpenCV open-source library; YOLOv5 (You Only Look Once, version 5) is a popular, extensively trained object detection algorithm.
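As an illustration of the kind of OpenCV-based color recognition the kit performs, here is a minimal sketch (the HSV thresholds and file name are illustrative assumptions):

import cv2
import numpy as np

frame = cv2.imread('scene.jpg')  # a frame captured by the kit's camera
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# keep only red-ish pixels (illustrative HSV range)
mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    if cv2.contourArea(c) > 500:  # ignore small specks
        x, y, w, h = cv2.boundingRect(c)
        print('red object centered at', (x + w // 2, y + h // 2))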
The expansion of recognition algorithms is also intended to give users their own creative direction: users can add other recognition algorithms to the existing AI Kit 2023.
Summary
The upgrade of the AI Kit 2023 has been a great success, thanks to extensive user feedback and product planning. This upgrade provides users with a better learning and practical experience, helping them to master AI technology more easily. The new AI Kit also introduces many new features and improvements, such as more accurate algorithms, more stable performance, and a more user-friendly interface. In summary, the upgrade of the AI Kit 2023 is a very successful improvement that will bring better learning and practical experiences and a wider range of application scenarios to more users.
In the future, we will continue to adhere to the principle of putting users first, continuously collect and listen to user feedback and needs, and further improve and optimize the AI Kit 2023 to better meet user needs and application scenarios. We believe that with continuous effort and innovation, the AI Kit 2023 will become an even better AI Kit, providing better learning and practical experiences for users and promoting the development and application of AI technology.
-
-
RE: Facial Recognition and Tracking Project with mechArm M5stack
@ajb2k3 Thanks for your support, we will share more interesting projects in the future. If you want a mechArm, please contact us!
-
RE: Facial Recognition and Tracking Project with mechArm M5stack
@pengyuyan Of course! You need to make some modifications to the code!