Wow! The project looks interesting! Do you have any ideas that use the end effector to complete some motion?
Posts made by ElephantRobotics
RE: Realizing Voice Control for Robotic Arm Movement -M5Stack-base
RE: Smart Applications of Holography and Robotic Arms myCobot 320 M5Stack-Basic
Thank you for your comment!
We have also looked up relevant information and realized that this is not holographic technology, which was our mistake. The technology used in the project is auto-stereoscopic imaging based on the principle of persistence of vision (POV), which does not achieve as good an effect as true holography.
Thank you once again for your message! We will try to use real holographic technology in combination with robotic arms in the future!
Smart Applications of Holography and Robotic Arms myCobot 320 M5Stack-Basic
Do you think this display is innovative and magical? Actually, this is a technology called holographic projection. Holographic technology has become a part of our daily lives, with applications covering multiple fields.

In the entertainment industry, holographic technology is used in movie theaters, game arcades, and theme parks, where holographic projection gives viewers more realistic visual effects and a richer entertainment experience. In the medical field, it is widely used in diagnosis and surgery: by presenting high-resolution 3D images, doctors can observe a condition more accurately, improving the effectiveness of diagnosis and surgery. In education, it is used to create teaching materials and science exhibitions that help students better understand and master knowledge. It is also applied in engineering and manufacturing, safety monitoring, virtual reality, and other fields, bringing more convenience and innovation to our lives.

It is foreseeable that, as the technology develops and its application scenarios expand, holographic technology will play an even more important role in our future lives.
(Images from the internet)
This article describes how to use the myCobot 320 M5Stack 2022 and the DSee-65X holographic projection device to achieve a naked-eye 3D display.
This project is jointly developed by Elephant Robotics and DSeeLab Hologram.
DSee-65X holographic equipment:
Let's take a brief look at how the holographic image is generated. The holographic screen is a display device that exploits persistence of vision (POV, the after-image of moving objects): a bar of ultra-high-density LEDs rotates at high speed, and the resulting image produces a 3D visual-enhancement, air-suspension, stereoscopic display effect. It breaks through the limitations of traditional flat displays, supports real-time synchronization and interactive development, and is leading a new trend in the commercial holographic display industry.
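As a back-of-the-envelope illustration of the POV principle (the numbers here are illustrative assumptions, not DSee-65X specifications): every angular position of the circle must be repainted faster than the eye's flicker-fusion threshold, which fixes a minimum rotation speed for the LED bar.

```python
# Illustrative POV arithmetic: the minimum rotation speed of an LED bar
# such that every angular position is refreshed above the eye's
# flicker-fusion threshold. The 30 Hz threshold and the blade counts are
# illustrative assumptions, not DSee-65X specifications.

def min_rpm(refresh_hz, blades=1):
    """Each full turn repaints the circle `blades` times, so the
    required revolutions per second is refresh_hz / blades."""
    return refresh_hz / blades * 60.0

print(min_rpm(30))     # a single bar needs 1800.0 rpm to reach 30 Hz
print(min_rpm(30, 2))  # two opposed bars need only 900.0 rpm
```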
DSee-65X is a product of DSee Lab Hologram, a company that specializes in holographic technology.
DSee-65X: high resolution, high brightness, supports various content formats, WiFi connection, APP operation, cloud remote cluster control, unlimited splicing for large screen display, 30,000 hours of continuous operation.
Here is a video introduction of DSee-65X.
myCobot 320 M5Stack 2022
myCobot 320 M5Stack is an upgraded version of the myCobot 280 product, mainly suitable for makers and researchers, and can be customized according to user needs through secondary development. It has three major advantages of usability, safety, and economy, with a sophisticated all-in-one design. The myCobot 320 weighs 3kg, has a payload of 1kg, a working radius of 350mm, and is relatively compact but powerful. It is easy to operate, can collaborate with humans, and work safely. The myCobot 320 2022 is equipped with a variety of interfaces and can quickly adapt to various usage scenarios.
Here is a video presentation of the myCobot 320 M5Stack 2022
With the two devices introduced, the next step is to combine the holographic device with the robotic arm so that they work together. The operation of this project is very simple and can be divided into two steps:
Install the DSee-65X at the end of myCobot 320.
Control myCobot 320 to perform a beautiful trajectory to display the holographic image.
The DSee-65X and the myCobot 320 M5Stack 2022 are products from two different companies. When we received them, we found that the holographic device could not be mounted directly on the end of the myCobot 320, so we needed to modify it.
This is the end structure of the myCobot 320.
This is the DSee-65X
According to the provided information, we added a board as a bridge between them for adaptation.
The maximum payload of the myCobot 320 is 1 kg, so this modification is completely feasible for it.
Controlling the Robotic Arm
Our goal is to design a trajectory for the myCobot 320 robotic arm that ensures an unobstructed view of the hologram display.
The code in the picture is a graphic code for the trajectory of the myCobot 320.
myBlockly's underlying code is written in Python, so we can also directly use Python code to control the robotic arm. The following is an example of Python code:
import time
from pymycobot.mycobot import MyCobot

mc = MyCobot('/dev/ttyUSB0')
mc.set_speed(60)

# move to a home position
mc.send_angles([0, -90, 90, 0, 0, 0], 80)
time.sleep(1)

# move to a new position
mc.send_angles([0, -90, 90, 0, 0, 30], 80)
time.sleep(1)

# move to another position
mc.send_angles([0, -90, 90, 0, 30, 30], 80)
time.sleep(1)

# move to a final position
mc.send_angles([0, -90, 90, 0, 30, 0], 80)
time.sleep(1)

mc.release_all_servos()
Briefly explain how to use the DSee-65X.
DSee-65X has its own dedicated LAN. By connecting your computer to the same LAN, you can launch the software to make the holographic device work.
The whole process may seem to be just a display from a holographic imaging device, with the robotic arm serving as a support. However, we can imagine more possibilities: using holographic projection to place 3D models or images in space, then capturing users' movements or gestures with sensors or cameras to control the robotic arm. For example, in manufacturing or logistics, combining robotic arms with holographic technology can make production and logistics operations more efficient. In the medical field, robotic arms and holographic technology can enable more precise surgery and treatment. In short, combining robotic arms and holographic technology can bring more intelligent and precise control and operation methods to various application scenarios, improving production efficiency and work quality.
These are all areas that require creative minds like yours to put in effort and develop! Please feel free to leave your ideas in the comments below and let's discuss together how to create more interesting projects.
RE: Building a Smart Navigation System using myCobot M5Stack-Base and myAGV
This project is developed by users. As you said, the whole project has not been automated yet. It is currently only in the development stage, and the automation function may be completed in the future, and we will continue to follow up.
Building a Smart Navigation System using myCobot M5Stack-Base and myAGV
As a developer, I am currently involved in an interesting project to combine a SLAM (Simultaneous Localization and Mapping) car, myAGV, with a small six-axis robotic arm, myCobot 280 M5Stack, for research on logistics automation in education and scientific fields.
myAGV is a small vehicle that can perform mapping and navigation, using a Raspberry Pi 4B as its controller. It can localize itself and move both indoors and outdoors. myCobot 280 is a small collaborative robotic arm with six degrees of freedom that can accomplish various tasks in limited space.
My project goal is to integrate these two devices to achieve automated logistics transportation and placement. We plan to use open-source software and existing algorithms to achieve autonomous navigation, localization, mapping, object grasping, and placement functions. Through documenting the process in this article, we aim to share our journey in developing this project.
The equipment that I am using includes:
myAGV, a SLAM car that is capable of mapping and navigation.
myCobot280 M5Stack, a six-axis collaborative robotic arm with a complete API interface that can be controlled via Python.
An adaptive gripper that can be mounted as an end effector with MyCobot280, which is capable of grasping objects.
Ubuntu 18.04, Python 3.0+, ROS1.
Note: myAGV is controlled by Raspberry Pi 4B, and all environment configurations are based on the configurations provided on the Raspberry Pi.
The picture below shows the general flow of this project.
I split the overall function into small parts, implemented each part independently, and finally integrated them together.
Firstly, I am working on the functions of myAGV, to perform mapping and automated navigation. I am implementing these functions based on the information provided in the official Gitbook.
I am using the gmapping algorithm to perform mapping. Gmapping, also known as grid-based mapping, is a well-established algorithm for generating 2D maps of indoor environments. It works by building a grid map of the environment using laser range finder data, which can be obtained from the sensors mounted on myAGV.
It's worth noting that I have tried myAGV in various scenarios, and the mapping performance is good when the environment is relatively clean. However, when the surrounding area is complex, the mapping results may not be as good. I will try to improve it by modifying the hardware or software in the future.
The picture below shows myAGV performing automatic navigation.
During automatic navigation, myAGV still drifts. Implementing the navigation functionality is quite complex because its preconditions are strict: after enabling navigation, the actual position of myAGV has to be adjusted, and it must turn in place to confirm that the position is correct. There is still much to improve, such as automatically locating the vehicle on the map when navigation starts.
After handling the myAGV, the next step is to control the myCobot movement.
Here, I use Python to control myCobot 280. Python is an easy-to-use programming language, and myCobot's Python API is also quite comprehensive. Below, I will briefly introduce several methods in pymycobot.
● time.sleep(seconds): pause for a few seconds (the robotic arm needs a certain amount of time to complete its movement).
● send_angles([angle_list], speed): send the target angle of each joint and the movement speed to the robotic arm.
● set_gripper_value(value, speed): control the opening and closing of the gripper; 0 is closed, 100 is open, adjustable from 0 to 100.
I wrote a simple program to grab objects; see the GIF demo.
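A grab sequence of that kind can be sketched with the three pymycobot calls described above. The joint angles, serial port, and timing below are placeholders for illustration, not the exact values from my program:

```python
import time

# A minimal pick-and-carry sketch built from send_angles(),
# set_gripper_value(), and time.sleep(). All joint angles and the serial
# port are placeholder values, not the ones from the actual program.

GRAB_SEQUENCE = [
    # (joint angles in degrees, gripper value: 0 closed .. 100 open)
    ([0, 0, 0, 0, 0, 0], 100),        # home pose, gripper open
    ([30, -45, 60, -15, 0, 0], 100),  # hover above the object
    ([30, -60, 75, -15, 0, 0], 0),    # descend and close the gripper
    ([0, 0, 0, 0, 0, 0], 0),          # carry the object back home
]

def run_sequence(mc, speed=50, pause=2.0):
    """Replay the sequence on a connected MyCobot instance."""
    for angles, grip in GRAB_SEQUENCE:
        mc.send_angles(angles, speed)
        time.sleep(pause)               # give the arm time to finish moving
        mc.set_gripper_value(grip, speed)
        time.sleep(1)

def main():  # requires a connected arm, so it is not called here
    from pymycobot.mycobot import MyCobot
    mc = MyCobot('/dev/ttyUSB0', 115200)  # typical serial settings
    run_sequence(mc)
```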
After dealing with the small functions, the next step is to establish communication between myCobot and myAGV.
The controller of myAGV is a Raspberry Pi, which is a micro-computer (with Ubuntu 18.04 system) that can be programmed on it.
MyCobot 280 M5Stack needs to be controlled by commands sent from a computer.
Based on the above conditions, there are two ways to establish communication between them:
Serial communication: directly connect them using a TypeC-USB data cable (the simplest and most direct method).
Wireless connection: myCobot supports Wi-Fi control, and commands can be sent to the corresponding IP address (more complicated, and the connection is less stable).
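Assuming pymycobot's `MyCobot` (serial) and `MyCobotSocket` (Wi-Fi) classes, the choice between the two transports can be wrapped in a small factory; the baud rate and network port below are pymycobot's usual defaults:

```python
# Sketch of a transport selector for the two options above. The imports
# are deferred so that the chosen transport decides the dependency;
# 115200 (baud) and 9000 (network port) are pymycobot's usual defaults.

def make_arm(mode, address):
    """Create a robot connection.

    mode="serial": address is a device path such as /dev/ttyUSB0
    mode="wifi":   address is the arm's IP on the shared network
    """
    if mode == "serial":
        from pymycobot.mycobot import MyCobot
        return MyCobot(address, 115200)
    if mode == "wifi":
        from pymycobot import MyCobotSocket
        return MyCobotSocket(address, 9000)
    raise ValueError("unknown transport: %r" % mode)
```

Either way, the returned object exposes the same control calls, e.g. `mc = make_arm("serial", "/dev/ttyUSB0")` followed by `mc.send_angles(...)`.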
Here, I choose to use serial communication and directly connect them with a data cable.
Here I recommend a software called VNC Viewer, which is a cross-platform remote control software. I use VNC to remotely control myAGV, which is very convenient because I don't have to carry a monitor around.
If you have any better remote control software, you can leave a comment below to recommend it to me.
Let's see how the overall operation works.
In this project, only simple SLAM-related algorithms are used. The navigation algorithm needs to be further optimized to achieve more accurate navigation. As for the usage of myCobot, it is a relatively mature robotic arm with a convenient interface, and the end effectors provided by the Elephant Robotics can meet the requirements without the need to build a gripper for the project.
There are still many aspects of the project that need to be optimized, and I will continue to develop it in the future. Thank you for watching, and if you have any interest or questions, please feel free to leave a comment below.
RE: The Ultimate Robotics Comparison: A Deep Dive into the Upgraded Robot AI Kit 2023
You can use the robotic arm with the AI Kit; you only need to download the project.
The Ultimate Robotics Comparison: A Deep Dive into the Upgraded Robot AI Kit 2023
AI Kit (Artificial Intelligence Kit) is mainly designed to provide a set of kits suitable for beginners and professionals to learn and apply artificial intelligence. It includes robotic arms (myCobot 280 M5Stack, mechArm 270 M5Stack, myPalletizer 260 M5Stack) and related software, hardware, sensors, and other devices, as well as supporting tutorials and development tools. The AI Kit aims to help users better understand and apply artificial intelligence technology and provides them with opportunities for practice and innovation. The latest upgrade further enhances the functionality and performance of the AI Kit 2023, making it more suitable for various scenarios and needs, including education, scientific research, manufacturing, and more.
AI Kit is an entry-level artificial intelligence kit that combines visual, positioning, grabbing, and automatic sorting modules in one. The kit is based on the Python programming language and enables control of robotic arms through software development. With the ROS robot operating system in the Ubuntu system, a real 1:1 scene simulation model is established, allowing for quick learning of fundamental artificial intelligence knowledge, inspiring innovative thinking, and promoting open-source creative culture. This open-source kit has transparent designs and algorithms that can be easily used for specialized training platforms, robotics education, robotics laboratories, or individual learning and use.
Why upgrade AI Kit 2023?
The answer to why we upgraded AI Kit 2023 is multifaceted. First, we collected extensive feedback from our users and incorporated their suggestions into the new release. The upgraded version enhances the functionality and performance of the AI Kit, making it more suitable for various scenarios and industries such as education, research, and manufacturing. The following are some of the reasons for this.
● Even with detailed installation instructions, installation environment setup for the AI Kit can still be challenging due to various reasons, causing inconvenience to users.
● The first generation of the AI Kit only has two recognition algorithms: color recognition and feature point recognition. We aim to provide a more diverse range of recognition algorithms.
● Due to the abundance of parts and complex device setups, the installation process of the AI Kit can be time-consuming and require a lot of adjustment.
Based on the above 3 points, we have begun optimizing and upgrading the AI Kit.
What aspects have been upgraded in AI Kit 2023?
Let’s take a look at a rough comparison table of the upgrades.
The additions to the functionality can be divided into two main areas of improvement.
One is the software upgrades, and the other is the hardware upgrades.
Let’s start by looking at the hardware upgrades.
The AI Kit 2023 has been upgraded in several aspects, as shown in the comparison table. The updated AI Kit has a clean and minimalist style with multiple hardware upgrades, including:
● Acrylic board: upgraded hardness and material
● Camera: upgraded to a higher resolution and fitted with a light
● Camera housing: upgraded from plastic to metal
● Suction pump: power adjusted to a suitable level (neither too strong nor too weak) and interface upgraded (older models required a separate power supply connection)
● Arm base: reinforced fixing so that arm movement is more stable
● Bucket/parts box: smaller in size for easier carrying and installation
Here is a video of unboxing the AI Kit 2023.
The overall impression is still very good, let’s take a look at the software upgrades that have been made.
● Optimization of environment setup: the previous version of the AI Kit had to run in a ROS development environment. Based on user feedback that installing Linux, ROS, and other environments was difficult, we now load the program directly into a plain Python environment, which is far easier to set up than Python plus ROS.
● Upgrade of program UI: The previous version had a one-click start UI interface, which did not provide users with much information (similar to simple operations such as booting up). In the AI Kit 2023 program, a brand new UI interface has been designed, which can give users a refreshing feeling in terms of both aesthetics and functionality. It not only provides users with convenient operation, but also helps users to have a clearer understanding of the operation of the entire program.
From the figure, we can see the features of connecting the robotic arm, opening the camera, selecting recognition algorithms, and automatic startup. These designs can help users better understand the AI Kit.
● Breakthroughs in recognition algorithms: In addition to the original color recognition and feature point recognition algorithms, the AI Kit has been expanded to include five recognition algorithms, which are color recognition, shape recognition, ArUco code recognition, feature point recognition, and YOLOv5 recognition. The first four recognition algorithms are based on the OpenCV open-source software library. YOLOv5 (You Only Look Once version 5) is a recent popular recognition algorithm and a target detection algorithm that has undergone extensive training.
The expansion of recognition algorithms is also intended to provide users with their own creative direction. Users can add other recognition algorithms to the existing AI Kit 2023.
The upgrade of the AI Kit 2023 has been a great success, thanks to extensive user feedback and product planning. This upgrade provides users with a better learning and practical experience, helping them to master AI technology more easily. The new AI Kit also introduces many new features and improvements, such as more accurate algorithms, more stable performance, and a more user-friendly interface. In summary, the upgrade of the AI Kit 2023 is a very successful improvement that will bring better learning and practical experiences and a wider range of application scenarios to more users.
In the future, we will continue to adhere to the principle of putting users first, continuously collect and listen to user feedback and needs, and further improve and optimize the AI Kit 2023 to better meet user needs and application scenarios. We believe that with continuous effort and innovation, the AI Kit 2023 will become an even better AI Kit, providing better learning and practical experiences for users and promoting the development and application of AI technology.
RE: Facial Recognition and Tracking Project with mechArm M5stack
@ajb2k3 Thanks for your support, we will share more interesting projects in the future. If you want a mechArm, please contact us!
RE: Facial Recognition and Tracking Project with mechArm M5stack
@pengyuyan Of course! You will need to make some modifications to the code!
RE: Facial Recognition and Tracking Project with mechArm M5stack
This post is reproduced from a user project.
Facial Recognition and Tracking Project with mechArm M5stack
Long time no see, I'm back.
I'll give a report on the recent progress of the facial recognition and tracking project. For those who are new, let me briefly introduce what I am working on. I am using a desktop six-axis robotic arm with a camera mounted on the end for facial recognition and tracking. The project consists of two modules: one for facial recognition, and the other for controlling the movement of the robotic arm. I've previously discussed how the basic movement of the robotic arm is controlled and how facial recognition is implemented, so I won't go into those details again. This report will focus on how the movement control module was completed.
mechArm 270 M5Stack, camera
Details of the equipment can be found in the previous article.
Motion control module
Next, I'll introduce the movement control module.
In the control module, the common input for movement control is the absolute position in Cartesian space. To obtain the absolute position, a camera and arm calibration algorithm, involving several unknown parameters, is needed. However, we skipped this step and chose to use relative displacement for movement control. This required designing a sampling movement mechanism to ensure that the face's offset is completely obtained in one control cycle and the tracking is implemented.
Therefore, to present the entire function quickly, I chose not to use a hand-eye calibration algorithm to handle the relationship between the camera and the arm, because hand-eye calibration involves a large amount of work.
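The core of this relative-displacement approach can be reduced to a few lines: take the pixel error between the saved sampling point and the current face position, scale it by a hand-tuned gain, and reject steps that are too small (noise) or too large (likely misdetection). The gain and band values below are illustrative, not the project's exact numbers:

```python
# Sketch of the sampling / relative-offset control idea: a pixel error
# becomes a clamped millimetre (or degree) step. The gain, dead band,
# and maximum step here are illustrative hand-tuned values.

def pixel_to_offset(error_px, gain, dead_band, max_step):
    """Scale a pixel error into a motion step; suppress noise below
    dead_band and refuse implausibly large jumps above max_step."""
    step = -error_px * gain
    if abs(step) < dead_band or abs(step) > max_step:
        return 0.0  # treat as noise or misdetection: do not move
    return step

# The face drifted 40 px from the sampling point; with a gain of
# 0.5 mm/px this yields a -20 mm correction toward the face.
print(pixel_to_offset(40, 0.5, dead_band=10, max_step=50))  # -20.0
```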
The code below shows how to obtain parameters from the information obtained by the facial recognition algorithm.
_, img = cap.read()
# Convert to grey scale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Detect faces
faces = face_cascade.detectMultiScale(gray, 1.1, 4)
# Draw the outline
for (x, y, w, h) in faces:
    if w > 200 or w < 80:
        # Limit the recognition width to between 80 and 200 pixels
        continue
    cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 3)
    center_x = (x+w-x)//2+x
    center_y = (y+h-y)//2+y
    size_face = w
The obtained variables, center_x, center_y, and size_face, are used to calculate the position. Below is the code for the algorithm that processes the data to control the movement.
run_num = 20  # Control cycle of 20 frames
if save_state == False:
    # Save a start point (save_x, save_y)
    save_x = center_x
    save_y = center_y
    save_z = size_face
    origin_angles = mc.get_angles()
    print("origin point = ", save_x, save_y, origin_angles)
    time.sleep(2)
    current_coords = mc.get_coords()
    save_state = True
else:
    if run_count > run_num:  # Limit the control period to 20 frames
        run_count = 0
        # Record the relative offsets
        error_x = center_x - save_x
        error_y = center_y - save_y
        error_z = size_face - save_z
        # Pixel differences are converted into actual offsets, which can be scaled and oriented
        trace_1 = -error_x * 0.15
        trace_z = -error_y * 0.5
        trace_x = -error_z * 2.0
        # x/z axis offset; note that this is open-loop control
        current_coords[2] += trace_z
        current_coords[0] += trace_x
        # Restrict the Cartesian-space x/z range
        if current_coords[0] < 70:
            current_coords[0] = 70
        if current_coords[0] > 150:
            current_coords[0] = 150
        if current_coords[2] < 220:
            current_coords[2] = 220
        if current_coords[2] > 280:
            current_coords[2] = 280
        # Inverse kinematic solution
        x = current_coords[0]
        z = current_coords[2]
        L1 = 100
        L3 = 96.5194
        x = x - 56.5
        z = z - 114
        cos_af = (L1*L1 + L3*L3 - (x*x + z*z)) / (2*L1*L3)
        cos_beta = (L1*L1 - L3*L3 + (x*x + z*z)) / (2*L1*math.sqrt(x*x + z*z))
        reset = False
        # The solution is only valid for some poses, so there may be no solution
        if abs(cos_af) > 1:
            reset = True
        if reset == True:
            current_coords[2] -= trace_z
            current_coords[0] -= trace_x
            print("err = ", cos_af)
            continue
        af = math.acos(cos_af)
        beta = math.acos(cos_beta)
        theta2 = -(beta + math.atan(z/x) - math.pi/2)
        theta3 = math.pi/2 - (af - math.atan(10/96))
        theta5 = -theta3 - theta2
        cof = 57.295  # radians to degrees
        move_juge = False
        # Limit the distance travelled: trace_1 is in degrees, trace_x/z in mm
        if abs(trace_1) > 1 and abs(trace_1) < 15:
            move_juge = True
        if abs(trace_z) > 10 and abs(trace_z) < 50:
            move_juge = True
        if abs(trace_x) > 25 and abs(trace_x) < 80:
            move_juge = True
        if move_juge == True:
            print("trace = ", trace_1, trace_z, trace_x)
            origin_angles[0] += trace_1
            origin_angles[1] = theta2 * cof
            origin_angles[2] = theta3 * cof
            origin_angles[4] = theta5 * cof
            mc.send_angles(origin_angles, 70)
        else:
            # With open-loop control, if no movement occurs the current coordinate values must be restored
            current_coords[2] -= trace_z
            current_coords[0] -= trace_x
    else:
        # 10 frames set aside for updating the camera coordinates at the end of the motion
        if run_count < 10:
            save_x = center_x
            save_y = center_y
            save_z = size_face
        run_count += 1
In the algorithm module, once the relative displacement has been obtained, how do we move the arm? To ensure a good movement effect, we did not directly use the coordinate-movement interface provided by mechArm; instead we added an inverse kinematics step in Python. For the specific posture, we calculated the inverse solution of the robotic arm and transformed coordinate movement into angle movement, avoiding singular points and other factors that disturb Cartesian-space movement. Combined with the facial recognition code, the whole project is complete.
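The idea can be sketched with a generic planar two-link inverse kinematics routine. This uses the standard atan2 formulation and the link lengths that appear in the snippet above; the real mechArm adds joint offsets and its own sign conventions, so treat this as a sketch rather than the arm's exact model:

```python
import math

# Generic planar two-link inverse kinematics, as a sketch of the approach:
# given a target (x, z) in the arm's plane, recover the two joint angles.
# L1 and L3 reuse the link lengths from the snippet above; the real
# mechArm adds joint offsets and different sign conventions.

def two_link_ik(x, z, L1=100.0, L3=96.5194):
    d2 = x * x + z * z
    cos_elbow = (d2 - L1 * L1 - L3 * L3) / (2 * L1 * L3)
    if abs(cos_elbow) > 1:
        return None  # target out of reach: no solution
    elbow = math.acos(cos_elbow)  # one of the two mirror solutions
    shoulder = math.atan2(z, x) - math.atan2(
        L3 * math.sin(elbow), L1 + L3 * math.cos(elbow))
    return shoulder, elbow

def two_link_fk(shoulder, elbow, L1=100.0, L3=96.5194):
    """Forward kinematics, used to verify an IK solution."""
    x = L1 * math.cos(shoulder) + L3 * math.cos(shoulder + elbow)
    z = L1 * math.sin(shoulder) + L3 * math.sin(shoulder + elbow)
    return x, z

sol = two_link_ik(150.0, 50.0)
if sol is not None:
    print(two_link_fk(*sol))  # recovers (approximately) the target (150.0, 50.0)
```

Running the forward kinematics on the returned angles is a cheap self-check that the solver is consistent before commanding real hardware.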
Let's look at the results together.
Normally, facial recognition has high computational requirements: the algorithm repeatedly evaluates adjacent pixels to increase recognition accuracy. We used the mechArm 270-Pi, whose Raspberry Pi 4B serves as the processor for facial recognition. Because the computing power of the Raspberry Pi is limited, we simplified the process and changed the recognition mechanism to only a few passes of fuzzy recognition. In our application, the background therefore needs to be relatively simple.
The facial recognition and robotic arm tracking project is completed.
Key information about the project:
● With low computing power, set up a simple usage scenario to achieve smooth results.
● Replace a complex hand-eye calibration algorithm with relative-position movement, and use a sampling movement mechanism to ensure that the face's offset is completely captured in one control cycle so that tracking can be implemented.
● In Python, add an inverse kinematics step: calculate the inverse solution of the robotic arm for specific postures and convert coordinate movement into angle movement, avoiding singular points and other factors that disturb Cartesian-space movement.
Some shortcomings of the project:
● There are certain requirements for the usage scenario, and a clean background is needed to run successfully (by fixing the scene, many parameters were simplified)
● As mentioned earlier, the computing power of the Raspberry Pi is insufficient; using another control board, such as a Jetson Nano or a high-performance image-processing computer, would run more smoothly.
● Also, in the movement control module, because we did not do hand-eye calibration, only relative displacement can be used. The control is divided into "sampling stage" and "movement stage". Currently, it is preferable to require the lens to be stationary during sampling, but it is difficult to ensure that the lens is stationary, resulting in deviation in the coordinates when the lens is also moving during sampling.
Finally, I would like to specially thank Elephant Robotics for their help during the development of the project, which made it possible to complete it. The MechArm used in this project is a centrally symmetrical structured robotic arm with limitations in its joint movement. If the program is applied to a more flexible myCobot, the situation may be different.
If you have any questions about the project, please leave me a message below.
RE: Exploring the Advantages and Differences of Different Types of Robotic Arms in AI Kit
Thank you. If you had the choice, which robotic arm would you choose?
Exploring the Advantages and Differences of Different Types of Robotic Arms in AI Kit
This article is primarily about introducing 3 robotic arms that are compatible with AI Kit. What are the differences between them?
If you have a robotic arm, what would you use it for? Simple control of the robotic arm to move it around? Repeat a certain trajectory? Or allow it to work in the industry to replace humans? With the advancement of technology, robots are frequently appearing around us, replacing us in dangerous jobs and serving humanity. Let's take a look at how robotic arms work in an industrial setting.
What is the AI Kit?
The AI Kit is an entry-level artificial intelligence Kit that integrates vision, positioning, grasping, and automatic sorting modules. Based on the Linux system and built-in ROS with a 1:1 simulation model, the AI Kit supports the control of the robotic arm through the development of software, allowing for a quick introduction to the basics of artificial intelligence.
Currently, the AI Kit can achieve color and image recognition, automatic positioning, and sorting. This kit is very helpful for users who are new to robotic arms and machine vision, as it lets you quickly understand how artificial intelligence projects are built and learn more about how machine vision works together with robotic arms.
Next, let's briefly introduce the 3 robotic arms that are compatible with the AI Kit.
The AI Kit can be adapted for use with the myPalletizer 260 M5Stack, myCobot 280 M5Stack, and mechArm 270 M5Stack. All three robotic arms are equipped with the M5Stack-Basic and the ESP32-ATOM.
myPalletizer260 is a lightweight 4-axis robotic arm, it is compact and easy to carry. The myPalletizer weighs 960g, has a 250g payload, and has a working radius of 260mm. It is explicitly designed for makers and educators and has rich expansion interfaces.
mechArm 270 is a small 6-axis robotic arm with a center-symmetrical structure (like an industrial structure). The mechArm 270 weighs 1kg with a payload of 250g, and has a working radius of 270mm. As the most compact collaborative robot, mechArm is small but powerful.
myCobot 280 is the smallest and lightest 6-axis collaborative robotic arm (UR structure) in the world, which can be customized according to user needs. The myCobot has a self-weight of 850g, an effective load of 250g, and an effective working radius of 280mm. It is small but powerful and can be used with various end effectors to adapt to various application scenarios, as well as support the development of software on multiple platforms to meet the needs of various scenarios, such as scientific research and education, smart home, and business pre R&D.
Let's watch a video to see how AI Kit works with these 3 robotic arms.
The video shows the color recognition and intelligent sorting function, as well as the image recognition and intelligent sorting function. Let's briefly introduce how AI Kit is implemented (using the example of the color recognition and intelligent sorting function).
This artificial intelligence project mainly uses two modules:
● Vision processing module
● Computation module (handles the eye-to-hand coordinate conversion)
Vision processing module
OpenCV (Open Source Computer Vision) is an open-source computer vision library used to develop computer vision applications. OpenCV includes a large number of functions and algorithms for image processing, video analysis, deep learning based object detection and recognition, and more.
We use OpenCV to process images. The video from the camera is processed to obtain information from the video such as color, image, and the plane coordinates (x, y) in the video. The obtained information is then passed to the processor for further processing.
Here is part of the code to process the image (colour recognition)
```python
# detect cube color
def color_detect(self, img):
    # set the arrangement of color HSV
    x = y = 0
    gs_img = cv2.GaussianBlur(img, (3, 3), 0)  # Gaussian blur
    # transform the image into the HSV color model
    hsv = cv2.cvtColor(gs_img, cv2.COLOR_BGR2HSV)
    for mycolor, item in self.HSV.items():
        redLower = np.array(item[0])
        redUpper = np.array(item[1])
        # wipe off all colors except the color in range
        mask = cv2.inRange(hsv, redLower, redUpper)
        # an erosion operation on the picture to remove edge roughness
        erosion = cv2.erode(mask, np.ones((1, 1), np.uint8), iterations=2)
        # a dilation operation on the image; its role is to deepen the color depth in the picture
        dilation = cv2.dilate(erosion, np.ones((1, 1), np.uint8), iterations=2)
        # adds pixels to the image
        target = cv2.bitwise_and(img, img, mask=dilation)
        # the filtered image is transformed into a binary image and placed in binary
        ret, binary = cv2.threshold(dilation, 127, 255, cv2.THRESH_BINARY)
        # get the contour coordinates of the image, where contours holds the coordinate values;
        # here only the external contours are detected
        contours, hierarchy = cv2.findContours(
            dilation, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if len(contours) > 0:
            # filter out misidentifications
            boxes = [
                box
                for box in [cv2.boundingRect(c) for c in contours]
                if min(img.shape[0], img.shape[1]) / 10
                < min(box[2], box[3])
                < min(img.shape[0], img.shape[1]) / 1
            ]
            if boxes:
                for box in boxes:
                    x, y, w, h = box
                # find the largest object that fits the requirements
                c = max(contours, key=cv2.contourArea)
                # get the lower left and upper right points of the located object
                x, y, w, h = cv2.boundingRect(c)
                # mark the target by drawing a rectangle
                cv2.rectangle(img, (x, y), (x + w, y + h), (153, 153, 0), 2)
                # calculate the rectangle center
                x, y = (x * 2 + w) / 2, (y * 2 + h) / 2
                # record which color was matched, for the sorting step
                if mycolor == "red":
                    self.color = 0
                elif mycolor == "green":
                    self.color = 1
                elif mycolor == "cyan" or mycolor == "blue":
                    self.color = 2
                else:
                    self.color = 3
    if abs(x) + abs(y) > 0:
        return x, y
    else:
        return None
```
Just obtaining image information is not enough; we must process the obtained data and pass it on to the robotic arm to execute commands. This is where the computation module comes in.
NumPy (Numerical Python) is an open-source Python library mainly used for mathematical calculations. NumPy provides many functions and algorithms for scientific computing, including matrix operations, linear algebra, random number generation, Fourier transforms, and more. We need to process the coordinates on the image and convert them to real-world coordinates, a conversion known as eye-to-hand calibration. We use Python and the NumPy library to calculate the coordinates and send them to the robotic arm to perform the sorting.
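Before looking at the project's full loop below, the core idea of the conversion can be sketched in a few lines. This is an illustration only, not the project's actual implementation: the function name, the frame center, and the millimetre-per-pixel scale are hypothetical stand-ins for the values the calibration step derives from markers in the camera frame.

```python
import numpy as np

def pixel_to_world(px, py, center, mm_per_pixel):
    """Map a pixel coordinate to millimetres in the arm's frame.

    center:       calibrated pixel coordinates of the arm-frame origin (hypothetical)
    mm_per_pixel: scale factor obtained from calibration markers (hypothetical)
    """
    offset = np.array([px, py], dtype=float) - np.array(center, dtype=float)
    return offset * mm_per_pixel

# Example: a cube detected at pixel (320, 180), with the frame center
# calibrated at (300, 200) and a scale of 0.5 mm per pixel.
real = pixel_to_world(320, 180, center=(300, 200), mm_per_pixel=0.5)
print(real)  # x = +10 mm, y = -10 mm
```

The real project additionally averages many frames before trusting a coordinate, which is what the counters in the loop below are doing.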
Here is part of the code for the computation.
```python
while cv2.waitKey(1) < 0:
    # read camera
    _, frame = cap.read()
    # deal img
    frame = detect.transform_frame(frame)
    if _init_ > 0:
        _init_ -= 1
        continue
    # calculate the parameters of camera clipping
    if init_num < 20:
        if detect.get_calculate_params(frame) is None:
            cv2.imshow("figure", frame)
            continue
        else:
            x1, x2, y1, y2 = detect.get_calculate_params(frame)
            detect.draw_marker(frame, x1, y1)
            detect.draw_marker(frame, x2, y2)
            detect.sum_x1 += x1
            detect.sum_x2 += x2
            detect.sum_y1 += y1
            detect.sum_y2 += y2
            init_num += 1
            continue
    elif init_num == 20:
        detect.set_cut_params(
            (detect.sum_x1) / 20.0,
            (detect.sum_y1) / 20.0,
            (detect.sum_x2) / 20.0,
            (detect.sum_y2) / 20.0,
        )
        detect.sum_x1 = detect.sum_x2 = detect.sum_y1 = detect.sum_y2 = 0
        init_num += 1
        continue
    # calculate params of the coords between cube and mycobot
    if nparams < 10:
        if detect.get_calculate_params(frame) is None:
            cv2.imshow("figure", frame)
            continue
        else:
            x1, x2, y1, y2 = detect.get_calculate_params(frame)
            detect.draw_marker(frame, x1, y1)
            detect.draw_marker(frame, x2, y2)
            detect.sum_x1 += x1
            detect.sum_x2 += x2
            detect.sum_y1 += y1
            detect.sum_y2 += y2
            nparams += 1
            continue
    elif nparams == 10:
        nparams += 1
        # calculate and set params for computing the real coord between cube and mycobot
        detect.set_params(
            (detect.sum_x1 + detect.sum_x2) / 20.0,
            (detect.sum_y1 + detect.sum_y2) / 20.0,
            abs(detect.sum_x1 - detect.sum_x2) / 10.0
            + abs(detect.sum_y1 - detect.sum_y2) / 10.0,
        )
        print("ok")
        continue
    # get detect result
    detect_result = detect.color_detect(frame)
    if detect_result is None:
        cv2.imshow("figure", frame)
        continue
    else:
        x, y = detect_result
        # calculate real coord between cube and mycobot
        real_x, real_y = detect.get_position(x, y)
        if num == 20:
            detect.pub_marker(real_sx / 20.0 / 1000.0, real_sy / 20.0 / 1000.0)
            detect.decide_move(real_sx / 20.0, real_sy / 20.0, detect.color)
            num = real_sx = real_sy = 0
        else:
            num += 1
            real_sy += real_y
            real_sx += real_x
```
The AI Kit project is open source and can be found on GitHub.
Comparing the video, content, and code of the program, the 3 robotic arms share the same framework and need only minor data modifications to run successfully.
There are roughly two main differences between these 3 robotic arms.
One is comparing the 4- and 6-axis robotic arms in terms of their practical differences in use (comparing myPalletizer to mechArm/myCobot).
Let's look at a comparison between a 4-axis robotic arm and a 6-axis robotic arm.
From the video, we can see that both the 4-axis and 6-axis robotic arms have a sufficient range of motion in the AI Kit's work area. The main difference is that myPalletizer starts up simply and quickly with only 4 joints in motion, letting it perform tasks efficiently and steadily, while myCobot moves 6 joints, two more than myPalletizer, which means more computation in the program and a longer startup time (in small scenarios).
In summary, when the scene is fixed, we can consider the working range of the robotic arm as the first priority when choosing a robotic arm. Among the robotic arms that meet the working range, efficiency and stability will be necessary conditions. If there is an industrial scene similar to our AI Kit, a 4-axis robotic arm will be the first choice. Of course, a 6-axis robotic arm can operate in a larger space and can perform more complex movements. They can rotate in space, while a 4-axis robotic arm cannot do this. Therefore, 6-axis robotic arms are generally more suitable for industrial applications that require precise operation and complex movement.
The second comparison is between the two 6-axis robotic arms, whose main difference is structure. mechArm is a centrally symmetrical robotic arm, and myCobot is a UR-structure collaborative robotic arm. We can compare how these two structures differ in actual application scenarios.
Here are the specifications of the two robotic arms.
The difference in structure between these two leads to a difference in their range of motion. Taking mechArm as an example, the centrally symmetrical structure of the robotic arm is composed of 3 pairs of opposing joints, with the movement direction of each pair of joints being opposite. This type of robotic arm has good balance and can offset the torque between joints, keeping the arm stable.
As shown in the video, mechArm is also relatively stable in operation.
You may now ask: is myCobot not useful, then? Of course not. The UR-structure robotic arm is more flexible and can achieve a larger range of motion, suitable for larger application scenarios. More importantly, myCobot is a collaborative robotic arm: it has good human-robot interaction ability and can work together with humans. 6-axis collaborative robotic arms are usually used for logistics and assembly work on production lines, as well as in the medical, research, and education fields.
As stated at the beginning, the difference between these 3 robotic arms included in the AI Kit is essentially how to choose a suitable robotic arm to use. If you are choosing a robotic arm for a specific application, you will need to take into consideration factors such as the working radius of the arm, the environment in which it will be used, and the load capacity of the arm.
If you are looking to learn robotic arm technology, you can choose a mainstream robotic arm currently on the market to learn from. myPalletizer is designed after a palletizing robotic arm, mainly used for stacking and handling goods on pallets. mechArm is designed after a mainstream industrial robotic arm, whose special structure keeps the arm stable during operation. myCobot is designed after a collaborative robotic arm, an arm structure popular in recent years that can work alongside humans, complementing human flexibility with machine strength and precision.
That's all for this post, if you like this post, please leave us a comment and a like!
We have published an article detailing the differences between mechArm and myCobot. Please click on the link if you are interested in learning more.
RE: myCobot 280-Ard conveyor control in an industrial simulation
Not quite correct; it is the computer that is needed to transfer the data and send the commands to myCobot.
myCobot 280-Ard conveyor control in an industrial simulation
This article was created by "Cang Hai Xiao" and is reproduced with the author's permission.
This article starts with a small example of experiencing a conveyor belt in a complete industrial scene.
A small simulated industrial application was built with the myCobot 280-Arduino Python API and load cells. A toy conveyor belt transfers the parts to the weighing pan, and an M5Stack Basic serves as the host for an electronic scale that weighs the parts coming off the conveyor belt and then transmits the weight reading from the pan to the PC over UART.
Using the PC as the host computer, a simple GUI was written in Python to display the weights in real time and allow the user to modify the weighing values.
Here is a detailed video of how this project works.
The following is the detailed process of the project.
Burn the Mega 2560 and ATOM firmware. (Check the GitBook for details.)
Write a weighing program and upload the program to M5Stack Basic.
Initialize the serial port and set the connection mode, establishing communication between the PC and the M5Stack Basic.
Calculate the ratio factor. The data read from the sensor via the M5Stack Basic are raw values and need to be calibrated with a 100g weight and a 200g weight to compute the factor that converts them into grams. In this case we calculated a ratio factor of -7.81.
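To make the calibration arithmetic concrete, here is a minimal sketch. The raw readings below are invented for illustration; only the resulting factor of -7.81 comes from our actual calibration.

```python
def ratio_factor(raw_100g, raw_200g):
    """Raw load-cell counts per gram, from two calibration points.

    The two raw readings passed in are hypothetical; our real calibration
    with 100 g and 200 g weights yielded a factor of -7.81.
    """
    return (raw_200g - raw_100g) / (200.0 - 100.0)

def to_grams(raw, raw_zero, factor):
    """Convert a raw reading to grams using the zero offset and the factor."""
    return (raw - raw_zero) / factor

factor = ratio_factor(raw_100g=-781, raw_200g=-1562)
print(factor)  # -7.81
print(round(to_grams(-390.5, 0.0, factor), 6))  # 50.0
```

Using two weights rather than one lets the zero offset drop out of the slope calculation, which is why the tare (zero) button is handled separately.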
Compute the weighing value from the load-cell reading and the conversion factor, and display it.
Use UART1 to send the data every 20 ms. It is recommended to apply an averaging or median filter to reduce the shock as parts drop from the hopper.
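As an illustration of that filtering suggestion (this is our own sketch, not the firmware code running on the M5Stack), a sliding-window median filter rejects the impact spikes well:

```python
from collections import deque
from statistics import median

class MedianFilter:
    """Sliding-window median filter for noisy scale readings."""

    def __init__(self, window=5):
        self.samples = deque(maxlen=window)

    def update(self, value):
        self.samples.append(value)
        return median(self.samples)

f = MedianFilter(window=5)
readings = [0.0, 0.1, 9.9, 0.2, 0.1]   # 9.9 is an impact spike from a falling part
filtered = [f.update(r) for r in readings]
print(filtered[-1])  # 0.1 -- the spike is rejected
```

A median is preferable to a plain average here because a single large spike shifts an average noticeably but leaves the median untouched.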
This is the event handler for the zero button, with 100 ms for button debouncing.
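The debounce idea itself is simple enough to sketch in plain Python (the class and timings here are illustrative, not taken from the UIFlow program): a press is only accepted if at least 100 ms have passed since the last accepted one.

```python
class Debouncer:
    """Accept a button event only if interval_ms have passed since the last accepted one."""

    def __init__(self, interval_ms=100):
        self.interval_ms = interval_ms
        self.last_accepted = None

    def press(self, now_ms):
        if self.last_accepted is None or now_ms - self.last_accepted >= self.interval_ms:
            self.last_accepted = now_ms
            return True   # genuine press: zero the scale
        return False      # contact bounce: ignore

zero_button = Debouncer(interval_ms=100)
events = [0, 30, 60, 250]          # bouncy contacts at 30 and 60 ms
accepted = [zero_button.press(t) for t in events]
print(accepted)  # [True, False, False, True]
```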
This is a simple electronic scale program written in UIFlow. Data can also be sent to the PC via UART1 through a TTL-USB adapter, and the program is written to the M5Stack Basic with a single click on Download. I used the offline version of UIFlow for ease of connection and debugging.
Use myBlockly to debug the parameters for the press (drop arm) and release (lift arm) actions
Write the PC program and install pymycobot.
(1) First, write the GUI with the Tkinter library. We can set the threshold for the weighing control; for example, in this commissioning I set 5 g.
(2) Import pymycobot.
(3) The callback for the OK button first lowers the myCobot arm to power on the conveyor; the conveyor starts working, and the electronic scale monitors the weight in real time. The loading() function reads the serial weighing data, determines whether the threshold has been reached, and lifts the myCobot arm when it has.
```python
#============
# Function:
# 1. Setting of the weighing values, displayed in the GUI.
# 2. Use the progress bar to show the progress of the weighing.
# 3. When 99% of the target value is reached, a command is given to
#    myCobot to perform a stop operation.
# date: 2022-11-10
# version: 0.2
# Joint adjustment: combined with the myCobot press and release actions
#============
from tkinter import *
import tkinter.ttk
import serial
import time
from pymycobot.mycobot import MyCobot
from pymycobot.genre import Coord

#==== Global variable initialisation
global val           # Measured weight
val = 0.0
global iset          # Scale factor, based on set values, setvalue/100
iset = 5 / 5
global c_set         # Input box value forming the weighing judgement criterion
c_set = 0.0
global action_flag
action_flag = False
# Set progress bar maximum
maxbyte = 100

#====== myCobot initialization
mc = MyCobot('COM23', 115200)
mc.power_off()
time.sleep(2)
mc.power_on()
time.sleep(2)
print('is power on?')
print(mc.is_power_on())
time.sleep(2)
mc.send_angles([95.97, (-46.4), (-133.3), 94.3, (-0.9), 15.64], 50)  # Arm lift
time.sleep(2)

#==================
# Serial port initialization
try:
    arduino = serial.Serial("COM25", 115200, timeout=1)
except:
    print("Port connection failed")
ReadyToStart = True

# Show progress bar function
def show():
    mc.send_angles([95.6, (-67.2), (-130.3), 101.9, (-2.2), 23.11], 50)  # down
    # Set the current value of the progress bar
    progressbarOne['value'] = 0
    # Set the maximum value of the progress bar
    progressbarOne['maximum'] = maxbyte
    # Call the loading method
    loading()

# Process function
def loading():
    global byte
    global val
    global action_flag
    c_set = setvalue.get()
    iset = 100 / float(c_set)  # Calculation of the scaling factor
    byte = arduino.readline().decode('utf-8')
    try:
        if len(byte) != 0:
            val = byte
        else:
            pass
    except:
        pass
    if (1 - (float(c_set) - float(val)) / float(c_set)) >= 0.99 and action_flag == False:
        # Control myCobot movement when the remaining value is less than 1%
        print("trigger")
        mc.send_angles([95.97, (-46.4), (-133.3), 94.3, (-0.9), 15.64], 50)  # up
        action_flag = True  # Make sure we only act once, unless RESET
    # Set the progress bar pointer
    progressbarOne['value'] = (1 - (float(c_set) - float(val)) / float(c_set)) * 100
    # Display the measured weighing data in Label4
    strvar.set(str(float(val)))
    # Call the loading method again after 20 ms
    progressbarOne.after(20, loading)

# Reset button callback function
def reset_click():
    global action_flag
    action_flag = False  # Reset flag word to prepare for the next action

# OK button callback function
def ok_click():
    show()

#=========== UI design
# Main window
win = tkinter.Tk()
win.title("mycobot")
# Create a frame form object
frame = tkinter.Frame(win, borderwidth=2, width=450, height=250)
# Fill the form horizontally and vertically
frame.pack()
# Create label 1; place() positions it relative to the upper left corner
# of the form (x, y) with the given size (width, height)
Label1 = tkinter.Label(frame, text="Set value (g)")
Label1.place(x=35, y=15, width=80, height=30)
# Set the data input box setvalue
setvalue = tkinter.Entry(frame, fg='blue', font=("微软雅黑", 16))
setvalue.place(x=166, y=15, width=60, height=30)
# Set label 3
Label3 = tkinter.Label(frame, text="Real Value (g)")
Label3.place(x=35, y=80, width=80, height=30)
# Set label 4, which holds the measured weight value; the default is 0.0 g
strvar = StringVar()
strvar.set("0.0")
Label4 = tkinter.Label(frame, textvariable=strvar, fg='green', font=("微软雅黑", 16))
Label4.place(x=166, y=80, height=30, width=60)
progressbarOne = tkinter.ttk.Progressbar(win, length=300, mode='determinate')
progressbarOne.place(x=66, y=156)
# Call functions using button controls
resetbutton = tkinter.Button(win, text="Reset", width=15, height=2,
                             command=reset_click).pack(side='left', padx=80, pady=30)
okbutton = tkinter.Button(win, text="OK", width=15, height=2,
                          command=show).pack(side='left', padx=20, pady=30)
# Start event loop
win.mainloop()
```
The program is debugged step by step:
(1) Debug the electronic scale to ensure that the weighing is correct, using weights for calibration. Make sure the data are correct.
(2) Connect myCobot to the conveyor belt, and install a simple button at the end of myCobot that triggers the power supply of the conveyor belt when the arm is lowered.
(3) Joint debugging: set the threshold in the GUI, trigger myCobot to drop the arm so the conveyor belt starts running (parts are transported, fall into the hopper, and are weighed in real time), and trigger myCobot to lift the arm after the threshold (5 g) is reached.
This is a simulated industrial application demonstrating the control functions of the myCobot 280 Arduino. We transmit the weighing data to the PC through the sensor plus M5Stack Basic, which indirectly reflects the running status of the conveyor belt. The PC receives the weighing data to monitor the transport of parts on the belt; when the threshold is reached, myCobot performs the arm-lifting action.
The program is elementary; the host-computer side is only about 150 lines. The difficulty is minimal, making it suitable for beginners getting started with understanding and tuning the robotic arm's electrical and mechanical parameters.
Thanks to Elephant Robotics' Drift Bottle Project for the myCobot 280 Arduino.
RE: A four-axis robotic arm ideal for industrial education |myPalletizer M5Stack-esp32
I'm very sorry about that. This forum does not allow GIFs.
Watch it on Hackster if you're interested!
A four-axis robotic arm ideal for industrial education |myPalletizer M5Stack-esp32
What is the 4-axis robotic arm?
In the era of Industry 4.0, where information technology is being used to promote industrial change, robotic arms are essential in industry transformation. Automated robotic arms can reduce staff labor and increase productivity using automation technology combined with artificial intelligence, voice, and vision recognition. Robotic arms are now very relevant to our lives. Most robotic arms are built like human hands to perform more tasks such as grasping, pressing, and placing. The axes of a robotic arm represent degrees of freedom and independent movement, and most robotic arms have between two and seven axes. Here I will show you a four-axis palletizing robotic arm that is suitable for introductory learning.
What is the palletizing robotic arm?
Palletizing means neatly stacking items. Palletizing robotic arms grip, transfer, and stack items according to a fixed process.
Which kind of robotic arm is more suitable? A 4-axis robotic arm? Or a 6-axis robotic arm?
Let's look at the table.
The 4-axis palletizing robotic arm can only move up and down, backwards and forwards, and left and right, with the end effector fixed facing downward. This is a significant limitation in terms of application, and such arms are mainly used in high-speed pick-and-place scenarios. Six-axis robotic arms suit a wide range of designs and can move without dead zones to reach any position within their field. We will mainly look at the four-axis palletizing robotic arm.
A video was made about the movement of two types of robotic arms.
myPalletizer 260 M5Stack
The myPalletizer robotic arm shown in the video, with an M5Stack-ESP32 as the central controller, is a fully wrapped, lightweight 4-axis palletizing robotic arm with an overall finless design; it is small, compact, and easy to carry. myPalletizer weighs 960g, has a 250g payload, and a working radius of 260mm. It is designed for individual makers and educational use, and with its multiple extension interfaces we can learn machine vision with the AI Kit.
Why would we recommend this arm as an introductory 4-axis palletizing robotic arm?
There are many four-axis (4-DOF) robotic arms in industry, with palletizing robotic arms being the mainstream. Compared to 6-axis robotic arms, myPalletizer has a more straightforward structure, fewer joints, less overhang, faster reaction times, and higher operating efficiency, making it easier to use. It is an excellent entry point among palletizing robotic arms. Let's take a look at the myPalletizer 260-M5Stack parameters.
The suitability of a robotic arm for learning requires several conditions.
The robotic arm must support multiple functions.
The robotic arm should have a mainstream structure, so that many industrial robotic arm models can serve as references.
Supporting documentation for the robotic arm is available and provides the user with basic operating instructions.
What can we learn with myPalletizer 260?
When programming the robotic arm, we will learn about forward and inverse kinematics, DH model kinematics, Cartesian coordinate systems, motors and servos, motion mechanics, programming, machine vision, etc. Here is a brief introduction to what DH model kinematics is.
First, let's talk about forward kinematics and inverse kinematics.
Forward kinematics: determine the position and pose of the end effector given the values of the robot joint variables.
Inverse kinematics: determine the values of the robot joint variables given the position and pose of the end effector.
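To get a feel for both directions, here is a textbook 2-link planar arm in Python; the link lengths are made up for illustration and have nothing to do with myPalletizer's real geometry:

```python
import numpy as np

L1, L2 = 100.0, 80.0   # hypothetical link lengths in mm

def forward(theta1, theta2):
    """Forward kinematics: joint angles -> end-effector position (x, y)."""
    x = L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2)
    y = L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """Inverse kinematics (elbow-down solution): (x, y) -> joint angles."""
    # law of cosines gives the elbow angle
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
    theta2 = np.arccos(np.clip(c2, -1.0, 1.0))
    # shoulder angle: direction to the target minus the elbow's contribution
    theta1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(theta2),
                                           L1 + L2 * np.cos(theta2))
    return theta1, theta2

x, y = forward(0.3, 0.5)
t1, t2 = inverse(x, y)
print(np.allclose(forward(t1, t2), (x, y)))  # True: IK reproduces the FK pose
```

Real 4- and 6-axis arms solve the same round trip in three dimensions, which is where the DH model below comes in.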
DH Model Kinematics:
DH model kinematics mainly constrains the placement of the joint coordinate systems: the transformation between adjacent joint coordinate systems is decomposed into 4 steps, each with only one variable or constant, thus reducing the difficulty of solving the manipulator's inverse kinematics.
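Those 4 elementary steps can be written out directly: the standard DH transform is a rotation and translation about z (theta, d) followed by a translation and rotation about x (a, alpha). The sketch below uses generic parameters, not myPalletizer's actual DH table:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between adjacent DH frames:
    Rot_z(theta) * Trans_z(d) * Trans_x(a) * Rot_x(alpha),
    i.e. one variable/constant per elementary step."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Forward kinematics is just the chained product of the joint transforms.
# With all parameters zero the transform is the identity:
print(np.allclose(dh_transform(0, 0, 0, 0), np.eye(4)))  # True

# A single revolute joint rotated 90 degrees with a 100 mm link:
T = dh_transform(np.pi / 2, 0, 100.0, 0)
print(np.round(T[:3, 3], 6))  # the end of the link lands at (0, 100, 0)
```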
With a robotic arm in hand, we can learn much more about robotics.
Open Source Information
Elephant Robotics provides relevant information about myPalletizer in its GitBook. There are basic operation tutorials in mainstream programming languages, such as Python, and a series of detailed introductions from setting up the environment to controlling the robotic arm, providing beginners with a quick way to build and use the robotic arm.
More open source code on GitHub.
Artificial Intelligence Kit
We also provide an Artificial Intelligence Kit. A robotic arm alone cannot do a human's work; it also needs a pair of eyes (a camera) for recognition, and the combination of the two can replace manual labor. A camera merely displays the picture it captures; we need to program it to implement color and object recognition. We used OpenCV and Python to recognize and grab colored wooden blocks and to recognize and grab objects.
Let's see how it works.
The Artificial Intelligence Kit is designed to give us a better understanding of machine vision and machine learning. OpenCV is a powerful library of machine vision algorithms. If you want to learn more about the code, you can look up the project on GitHub.
myPalletizer is an excellent robotic arm for those just starting! I hope this article will help you choose your own robotic arm. If you still want to know more, feel free to comment below. If you enjoyed this article, please give us your support, and like us, your like is our motivation to update!
RE: My first try with the little six-axis robotic arm| mechArm 270-M5Stack
The M5Stack Basic (ESP32) is mainly used for the Internet of Things. The robotic arm needs a bridge to connect to the computer, and the M5Stack Basic ESP32 development board serves as that bridge; it also powers the arm. We have flashed some MicroPython onto the M5 to make certain functions easier to use.
RE: My first try with the little six-axis robotic arm| mechArm 270-M5Stack
It looks great. Looking forward to your subsequent, even more exciting projects with mechArm.