Open-Source Delta Robotic Arm Project as a Possible Ally?

A few days ago, I came across a proprietary farming robot, roughly similar to Acorn, that used a delta robotic arm to carry out its operations in the field.
That got me wondering whether there was an open-source project working on that kind of arm, since it seemed to me it could be a brilliant starting point in the effort to make Acorn operational.

Today I came across this: https://www.deltaxrobot.com and Delta X - Opensource Delta Robot Kit | Hackaday.io

I hope you will find it useful!

You might want to look at the Haddington Dynamics robot arm Dexter. It is open source, but you can also buy kits and completed units. This is a very advanced arm for the price. Open Source — HADDINGTON DYNAMICS

Thanks, I think this delta is interesting. I might try to do a BLDC version. If it were ruggedised and combined with something like GitHub - samuk/OpenWeedLocator: An open-source, low-cost, image-based weed detection device for fallow scenarios, it could become a general-purpose weeding/sowing actuator for Acorn or other similar robots.

Actually, this looks better:

I think we will use a delta arm once we get more into tooling. I like brushless motors, and I recently designed a four-axis brushless arm as a personal project using 3D printed planetary gears. I'm thinking I'd like to turn the first axis of that arm into the joint for a delta. One benefit of using our own design is that everything would be in the same CAD platform, and we could adjust the motor and gear sizes as needed to ensure we have the strength we need. But feel free to keep dropping possible arms here, and as we move towards tool implementations I will take a look!
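
As a rough illustration of that sizing exercise, here's a back-of-the-envelope helper for fixed-ring planetary stages; the tooth counts, motor torque, and efficiency figure are placeholders rather than the actual arm's gearing:

```python
def planetary_ratio(sun_teeth: int, ring_teeth: int) -> float:
    """Reduction of one planetary stage with the ring fixed, sun driven,
    carrier as output: ratio = 1 + ring/sun."""
    return 1.0 + ring_teeth / sun_teeth

def joint_torque(motor_torque_nm: float, stages, efficiency_per_stage=0.9):
    """Rough output torque after stacking stages, each given as
    (sun_teeth, ring_teeth). Purely illustrative numbers."""
    torque = motor_torque_nm
    for sun, ring in stages:
        torque *= planetary_ratio(sun, ring) * efficiency_per_stage
    return torque

# e.g. a 1.5 Nm BLDC through two 12/48 stages (5:1 each):
# joint_torque(1.5, [(12, 48), (12, 48)])  ->  ~30 Nm at the joint
```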

I was thinking about this a bit today.

Currently thinking this looks useful: GitHub - jkirsons/stealth-controller, as it already has ROS2 code.

Run it through BrushlessServoController/08-Reducers/5008 cycloidal at main · pat92fr/BrushlessServoController · GitHub

Then it's just a case of hacking this Nindamani-the-weed-removal-robot/stepper_control at master · AutoRoboCulture/Nindamani-the-weed-removal-robot · GitHub to use the 'stealth controller'.

Train it on this GitHub - cwfid/dataset: Crop/Weed Field Image Dataset

Replace the 'gripper' with a mini strimmer/weed whacker line and you might have something moderately useful.
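
To make the 'hacking' step slightly more concrete, a bridge node along these lines might work, assuming the Nindamani side can be made to publish joint targets as a JointState and a SimpleFOC/stealth-controller driver listens on a simple array topic. Both topic names and message types below are placeholders, not the real interfaces of either project:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState
from std_msgs.msg import Float32MultiArray

class JointBridge(Node):
    """Hypothetical bridge: listen to whatever joint targets the Nindamani
    stack publishes and re-emit them for a SimpleFOC-style controller."""
    def __init__(self):
        super().__init__('joint_bridge')
        # Placeholder topic names, chosen for illustration only.
        self.pub = self.create_publisher(Float32MultiArray, 'bldc_joint_targets', 10)
        self.create_subscription(JointState, 'delta_joint_states', self.on_joints, 10)

    def on_joints(self, msg: JointState):
        out = Float32MultiArray()
        out.data = [float(p) for p in msg.position]  # pass the angles straight through
        self.pub.publish(out)

def main():
    rclpy.init()
    node = JointBridge()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```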

Generally, moving the arm is the easy part. And for me, since I make planetary gearboxes all the time for fun, the gearing part is easy too. But what seems extremely hard is not just identifying things in the image (which is definitely hard), but also getting the hand/eye coordination working so that you know where in 3D space to move the manipulator.

I feel like we will need to use structure from motion to actually calculate the full 3D geometry of the scene underneath the camera, so that as the ground undulates up and down underneath the vehicle, you can always localize objects in the image in 3D. Perhaps it will be enough to use stereo for this. But the best systems I have seen take advantage of the consistency between multiple frames to come up with robust estimates as the vehicle moves.
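
To make the stereo option concrete, a minimal sketch of triangulating one matched pixel pair (say, the centroid of a detected weed in the left and right frames) into a 3D point with OpenCV might look like this; the intrinsics and baseline are placeholder numbers, and real projection matrices would come from a stereo calibration:

```python
import numpy as np
import cv2

# Placeholder rectified-stereo projection matrices; real values come from
# cv2.stereoCalibrate / cv2.stereoRectify, not hand-typed numbers.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
baseline_m = 0.12
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-baseline_m], [0.0], [0.0]])])

def locate_3d(pt_left, pt_right):
    """Triangulate one left/right pixel correspondence into a 3D point
    in the left camera frame (metres)."""
    pl = np.array(pt_left, dtype=float).reshape(2, 1)
    pr = np.array(pt_right, dtype=float).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pl, pr)  # homogeneous 4x1
    return (X_h[:3] / X_h[3]).ravel()

# e.g. locate_3d((702.0, 410.0), (655.0, 410.0)) -> x, y, z of a weed ~2 m away
```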

This can potentially get pretty complex. My plan is to build the parts I know how to build, which would be the delta arm and a good stereo camera system that can handle the harsh environment of the farm, as well as a high precision GPS stamping system that records the precise RTK GPS position of every shutter activation, and then work with other users and the crowd to build a vision architecture that really works specifically for this application. And that means not just pulling in an existing dataset, but also building a labeling and training pipeline so that the network is trained on real images it is likely to see, and so that we can expand it to work on specific problems we have in front of us.

That is, from where I am standing, extremely complicated. But in a lot of ways it is very similar to Tesla's architecture for self-driving cars, and my hope is that there are people who have pursued this kind of thing for an advanced degree who can help fill in those gaps.

We can also start with something simpler and make it more advanced over time.

All that said, I am saying this presuming I will design much of the system. Definitely what we want to do is get vehicle kits in other people's hands (which we are working on) so that other people can have a go at it using whatever methods make sense for them, and then we can all share what we're doing and proceed using the best ideas. So if some existing reducer designs and controllers work well for you, have at it! I am sure you can learn plenty of valuable stuff.

As far as doing things the complex way, here are some talks about the Tesla architecture I have watched before that give an idea of what a vision-guided robotics application might want to do. There is actually a lot of similarity between steering a car on a busy road and locating and killing weeds, though with some notable differences!

A community approach would allow experimentation with a range of gearboxes, such as cycloidal drives or even belt-driven actuators.

Isn't this where the Nindamani stuff is useful? They already have a working weed recognition → delta kinematics pipeline. It may not be optimised, but there is a functional prototype there that we can iterate on.

I was imagining you could just use some TOF sensors?

Would the open-hardware OAK-D – Luxonis work?

Yes, I think this approach sounds good. It seems like a problem that people will be interested in, and the hardware should be affordable for quite a wide range of groups/people.

Defining the problem clearly and having a reference hardware/software platform would help turn this into a collaborative project IMHO.

I'd personally start from the Nindamani, as it already exists, integrates with ROS2, and works to some degree. I'm pretty sure 3x of the open-hardware SimpleFOC stealth controller would be cheaper than 2x of the closed-hardware ODrives for this, so that might be a good component for a reference platform? I'd have a preference for using open hardware wherever possible, although we'd probably end up with a Jetson of some kind for the compute?

If we could get a European company interested in this, we could even apply for this. I have a UK-based company that could contribute, but it would be too small to be the lead partner.

I wonder if the GPS stamping is really core to this? Could that be handled later? It seems like something that shouldn't be too hard to accomplish in ROS2?

Added a few notes/concepts on a FOSS delta module.


Academic spinoff: https://ecoterrabot.com/ They claim they will open-source it.

They wrote this paper: Agriculture | Free Full-Text | Plant and Weed Identifier Robot as an Agroecological Tool Using Artificial Neural Networks for Image Identification

Looks like they are involved in Ant Robotics

Ant Robotics describe themselves as 'open source', but I can't yet find any code or schematics anywhere.

Hi Sam! Sorry for the delay; I wanted to think about the best answers. I will try to be brief as I need to head out shortly, but I thought I would jump in and respond a bit.

Regarding Nindamani: It is a nice piece of software and looks very useful for someone getting started. My issue is that at its heart, it is running semantic segmentation on individual images. There are going to be limits to how effective that can be. Note for example that the sample images show clean bare dirt with clear images of plants all standing on their own. When you have things like cover crops underneath the plants, computer vision gets much harder. In that case you need to include things like temporal consistency, which basically allows you to get more information to really narrow things down.
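
As a toy illustration of what temporal consistency can mean at its very simplest, you can blend each frame's per-pixel class probabilities with a decayed history instead of trusting any single frame. This sketch assumes the frames are already aligned (e.g. warped using odometry) and that some segmentation model supplies an HxWxC probability map; both are assumptions, not anything Nindamani provides:

```python
import numpy as np

class TemporalMaskFilter:
    """Blend the current frame's class probabilities with an exponentially
    decayed history, so a single noisy frame cannot flip a pixel's label."""
    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha   # weight of the newest frame
        self.state = None    # running HxWxC probability map

    def update(self, probs: np.ndarray) -> np.ndarray:
        if self.state is None:
            self.state = probs.astype(np.float32)
        else:
            self.state = self.alpha * probs + (1.0 - self.alpha) * self.state
        return self.state.argmax(axis=-1)   # smoothed per-pixel class labels
```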

You can also look at the 3D structure of plants. There is something called structure from motion, which allows you to take multiple images and reconstruct the 3D geometry of the scene. This would potentially allow you to understand the full 3D geometry of the scene and do plant identification not just in a 2D image but in full 3D. Plants move, though, so we would need a network akin to those that do structure from motion on moving people. Understanding the 3D structure of the scene would allow a robot arm to carefully pull a weed right next to a crop plant, something that would be very difficult to do using semantic segmentation alone.

So the question becomes: should we start with the basic system and expand it, or go straight to the advanced system? Probably we should do both. But I want to make sure I am building a system that is capable of doing the more advanced things, so we build it once and then spend time developing the algorithm. That is why I worry about things like ultra-precise GPS stamps, which will be important for training structure from motion algorithms. One of the most computationally intensive parts of structure from motion is resolving the relative camera poses, but if we build our GPS system right, it can eliminate a lot of computation there.
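
As a tiny example of what accurate stamps buy us: turning two RTK fixes into a metric camera translation is nearly free, whereas estimating that same translation from pixel correspondences is exactly the expensive, ambiguous part of classic structure from motion. A flat-earth sketch (accurate enough over the metres between shutter events; a proper geodetic library would be used in practice):

```python
import math

def enu_offset(lat0, lon0, alt0, lat1, lon1, alt1):
    """Flat-earth approximation of the east/north/up offset (metres) between
    two RTK fixes, giving SfM a near-exact relative camera translation."""
    R = 6378137.0  # WGS84 equatorial radius, metres
    d_lat = math.radians(lat1 - lat0)
    d_lon = math.radians(lon1 - lon0)
    east = d_lon * R * math.cos(math.radians(lat0))
    north = d_lat * R
    up = alt1 - alt0
    return east, north, up
```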

The Tesla vision system I talk about is what they call a "hydra" architecture. They provide a lot of detail about how it works, and I think we need to adopt something that, while much simpler than theirs, has some of the same attributes. And that is not going to come from grabbing an off-the-shelf semantic segmentation system.

I also do want to highlight that I am basically not ready to work directly on the architecture for the vision system, as we are still finalizing the basic hardware and working on a robust camera system.

As far as motor controllers: it won't be hard for someone to integrate with other motor controllers. But we already have really nice integration of ODrive and have done a lot of testing. Three of those controllers might be cheaper than two ODrives, but with three ODrives you can run two arms at once, which is more where I would like to go. That said, we can support both. One question I have is whether that controller is available for purchase. We are already going to go into production at some point on our Acorn PCBs, and we could expand to making our own motor controllers, but we want to limit how much work we take on at once. I know Oskar at ODrive personally and I really support all their work, despite my personal wish that they would go fully open hardware. I have no issue continuing to use their controllers for now, though I share your desire to ultimately use open controllers.

You asked about GPS stamping in ROS. We could do it on the Raspberry Pi, whether we use ROS or just our Python stack, but that adds extra latency. I want to use a stereo or quad camera system, and the cameras need a trigger signal to activate their shutters at the same time. So if we have a microcontroller doing 100 Hz sensor fusion for accurate GPS positioning, we can have the microcontroller also run the camera trigger signal and produce extremely accurate GPS stamps. The more accurate they are, the less compute is needed to train a structure from motion system.
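
In rough pseudo-Python (the real thing would be microcontroller firmware, and the trigger rate and GPIO call here are stand-ins), the idea is simply that the same 100 Hz fusion loop that owns the freshest pose also owns the trigger line, so every exposure is stamped with the pose from that exact cycle:

```python
from dataclasses import dataclass

@dataclass
class GpsFix:
    lat: float
    lon: float
    alt: float
    t: float          # time of the fused solution, seconds

class TriggerStamper:
    """Called once per 100 Hz fusion cycle with the newest fused RTK fix.
    Raises the (placeholder) camera trigger line at the desired frame rate
    and records the pose captured in that same cycle."""
    def __init__(self, trigger_hz: float = 10.0):
        self.period = 1.0 / trigger_hz
        self.last_trigger = float("-inf")
        self.stamps = []          # (trigger_time, GpsFix) pairs for training data

    def step(self, fix: GpsFix, set_trigger_line) -> None:
        if fix.t - self.last_trigger >= self.period:
            set_trigger_line(True)            # fire all shutters simultaneously
            self.stamps.append((fix.t, fix))  # pose recorded at the trigger instant
            self.last_trigger = fix.t
        else:
            set_trigger_line(False)
```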

It is important to me to be able to produce high quality datasets that researchers can use to help solve the tough problems. So for example I want a neural network to help with 3D structure from motion of plants. I have not been able to find any dataset for that. And I personally am not capable of solving that problem from a machine learning perspective. But I CAN produce highly accurate GPS stamped images and point the cameras at plants all day long. So we produce that dataset, and work with university researchers to produce new techniques for structure from motion on plants. They produce research for their work and help solve an extremely thorny problem for us. At least, that is my thinking.

Okay, I should run, but there are a few shotgun answers. I can come back later and take a look at some of the videos you shared. I have seen the IGUS delta before, but it's something like $7k and I know we can make a delta for less than that.

Hi, thanks for the reply!

Yes, this does sound like a harder problem. I don't personally use any cover crops in my growing.

In the first instance, I think I'd play with lasers rather than mechanical removal.

Not yet, we're still at the testing stage. We do have a plan in place to sell via Makerfabs once it's functional.

A significant issue I've run into is that the Keops kinematics aren't widely available. I've got some MATLAB code from this paper, but I need to investigate how/if that's usable.

Are you looking at the Luxonis open hardware stuff for that?

I'll Google some of that; plenty to learn on the RTK stuff.

Yeah, I'll have a think about that.

Yeah, absolutely. I'll look into the Keops kinematics stuff more when I get a minute.

One thought I have had is that if the affordable lasers do work, then perhaps I don't need Z at all; X can be handled by the robot's own motion, so perhaps a simple linear actuator carrying the laser is all I need to solve my specific problem of weeding young salad crops as they get established.

It might be that I do that in parallel to the more sophisticated delta stuff. If I solve my immediate problem, I free up ~100 hours a year that I currently spend hoeing and hand-weeding.
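
Roughly what I'm picturing, as a very hand-wavy sketch; the class names, actuator travel, and trigger window below are all made up, and the odometry and laser interfaces are stand-ins for whatever the real hardware provides:

```python
from dataclasses import dataclass

@dataclass
class WeedTarget:
    x_along_track: float   # odometry distance at which the weed sits, metres
    y_lateral: float       # offset from the actuator's centre, metres

class LineWeeder:
    """Sketch of the 'no Z, no X axis' idea: forward motion of the robot
    supplies X, one linear actuator supplies Y, and the laser fires when
    odometry says the target is under the head. Numbers are placeholders."""
    def __init__(self, travel_m: float = 0.4, window_m: float = 0.01):
        self.half_travel = travel_m / 2.0
        self.window = window_m
        self.queue = []                      # targets sorted by x_along_track

    def add_target(self, t: WeedTarget) -> None:
        if abs(t.y_lateral) <= self.half_travel:
            self.queue.append(t)
            self.queue.sort(key=lambda w: w.x_along_track)

    def step(self, odometry_m: float):
        """Return (actuator_setpoint_m, fire_laser) for the current odometry."""
        while self.queue and odometry_m > self.queue[0].x_along_track + self.window:
            self.queue.pop(0)                # passed it without firing: drop it
        if not self.queue:
            return 0.0, False
        nxt = self.queue[0]
        fire = abs(odometry_m - nxt.x_along_track) <= self.window
        if fire:
            self.queue.pop(0)
        return nxt.y_lateral, fire
```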

I revisited this a bit. The Keops kinematics made things complicated.

So I'm considering a more conventional delta using these parts:

The kinematics would run on a microcontroller and then communicate over CAN bus with a pair of your Twisted Fields controllers.
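
For reference, the inverse kinematics of a conventional rotary delta are compact enough to sit comfortably on a microcontroller. Here's a sketch of the standard closed-form solution in Python (MicroPython-friendly); the link lengths are placeholder values and z is taken as negative below the base plate:

```python
import math

def _angle_yz(x0, y0, z0, f, e, rf, re):
    """Shoulder angle for one arm, solved in that arm's YZ plane.
    Returns radians, or None if the point is out of reach."""
    t30 = math.tan(math.radians(30.0))
    y1 = -0.5 * t30 * f            # shoulder joint position on the base
    y0 = y0 - 0.5 * t30 * e        # shift the target to the effector's joint
    a = (x0 * x0 + y0 * y0 + z0 * z0 + rf * rf - re * re - y1 * y1) / (2.0 * z0)
    b = (y1 - y0) / z0
    d = -(a + b * y1) ** 2 + rf * (b * b * rf + rf)   # discriminant
    if d < 0:
        return None
    yj = (y1 - a * b - math.sqrt(d)) / (b * b + 1.0)
    zj = a + b * yj
    return math.atan2(-zj, y1 - yj)

def delta_ik(x, y, z, f=200.0, e=60.0, rf=150.0, re=300.0):
    """Closed-form IK for a rotary delta. f: base triangle side, e: effector
    triangle side, rf: upper arm, re: forearm (all mm, placeholder values).
    z is negative below the base. Returns three shoulder angles or None."""
    angles = []
    for phi in (0.0, 120.0, 240.0):
        c, s = math.cos(math.radians(phi)), math.sin(math.radians(phi))
        theta = _angle_yz(x * c + y * s, y * c - x * s, z, f, e, rf, re)
        if theta is None:
            return None
        angles.append(theta)
    return angles

# e.g. delta_ik(0.0, 0.0, -250.0) -> three equal angles for a centred effector
```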