Computation components

Along with the Jetson Xavier suggestions, the OAK-D might also be worth considering. It includes stereo cameras and multiple spatial AI capabilities, and it is OpenVINO compatible.
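For anyone curious what driving the OAK-D looks like in code, here is a minimal sketch of its on-device stereo depth pipeline using the depthai Python API; the stream name and queue settings are just illustrative choices:

```python
import depthai as dai

# Build a stereo depth pipeline: two mono cameras feeding the depth node.
pipeline = dai.Pipeline()
left = pipeline.create(dai.node.MonoCamera)
right = pipeline.create(dai.node.MonoCamera)
stereo = pipeline.create(dai.node.StereoDepth)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("depth")

left.setBoardSocket(dai.CameraBoardSocket.LEFT)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
left.out.link(stereo.left)
right.out.link(stereo.right)
stereo.depth.link(xout.input)

# Depth is computed on the device itself; the host just reads frames off a queue.
with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="depth", maxSize=4, blocking=False)
    depth_frame = q.get().getFrame()  # uint16 depth map in millimeters
```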


Worth a mention: the Intel Neural Compute Stick added to an RPi might be a lower-cost alternative, or simply a comparison point:

https://towardsdatascience.com/raspberry-pi-and-openvino-2dd11c3c88d9
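If anyone wants a feel for what that setup looks like in practice, here is a rough sketch of running an OpenVINO IR model on the stick from an RPi, using the (older) Inference Engine Python API from that era; the model and image paths are placeholders:

```python
import cv2
import numpy as np
from openvino.inference_engine import IECore

# Load an OpenVINO IR model and target the Neural Compute Stick ("MYRIAD").
ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")

input_name = next(iter(net.input_info))
_, _, h, w = net.input_info[input_name].input_data.shape

# Resize to the network input size and reorder HWC -> NCHW.
frame = cv2.imread("frame.jpg")
blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)[np.newaxis, ...]
result = exec_net.infer(inputs={input_name: blob})
```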


Thanks Will!

These integrated cameras are always interesting. So far I tend to be biased towards connecting traditional cameras to general-purpose compute, since that gives more flexibility in algorithm development. For example, I have a couple of these in the office that I want to test as navigation cameras:
https://www.e-consystems.com/nvidia-cameras/jetson-agx-xavier-cameras/stereo-camera.asp

That comes with sample apps for computing depth, though visual odometry would be nice too.
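In case it is useful as a starting point, computing depth from a rectified stereo pair is only a few lines with OpenCV. The matcher settings and the focal length/baseline below are placeholder values, not the e-con camera's actual calibration:

```python
import cv2
import numpy as np

# left.png / right.png stand in for a rectified stereo frame pair.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; these parameters are typical starting values.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# depth = focal_length_px * baseline_m / disparity, for calibrated cameras.
focal_px, baseline_m = 700.0, 0.12   # hypothetical calibration values
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
```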

Field of view matters too. Acorn can drive in any direction as it has four-wheel steering, so getting 360-degree coverage would be valuable. That could be achieved with four fisheye cameras at the corners. We could perhaps use these:
https://www.e-consystems.com/industrial-cameras/ar0234-usb3-global-shutter-camera.asp
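One wrinkle with fisheye lenses is undistorting the frames before feeding them to most algorithms. A minimal OpenCV sketch, where the intrinsics K and distortion D are made-up placeholders that would really come from cv2.fisheye.calibrate():

```python
import cv2
import numpy as np

# K and D would come from cv2.fisheye.calibrate(); these values are placeholders.
K = np.array([[300.0, 0.0, 640.0], [0.0, 300.0, 400.0], [0.0, 0.0, 1.0]])
D = np.array([0.05, -0.01, 0.0, 0.0])  # fisheye distortion coefficients

img = cv2.imread("corner_cam.png")
h, w = img.shape[:2]

# Precompute the undistortion maps once, then remap every frame.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
```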

More and more I am seeing good results estimating depth from monocular cameras using structure-from-motion techniques.
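As a flavor of the structure-from-motion approach, here is a rough two-frame sketch with OpenCV (feature matching, essential matrix, triangulation). The intrinsics are assumed, and note that depth from a single moving camera is only recovered up to scale unless you bring in wheel odometry or GPS:

```python
import cv2
import numpy as np

# Two frames from a moving monocular camera; filenames are placeholders.
img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Hypothetical intrinsics; real values come from camera calibration.
K = np.array([[700.0, 0.0, 640.0], [0.0, 700.0, 360.0], [0.0, 0.0, 1.0]])

# Match ORB features between the two frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Recover the relative camera motion from the essential matrix.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate the matches; depth is only up to scale without odometry.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
depths = pts4d[2] / pts4d[3]  # z-coordinates in the first camera's frame
```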

But the advantage of integrated solutions is that they work out of the box. I'm not sure how much compute would be needed to run all the algorithms I want to run, nor how much integration work is required. These are tasks we might be able to get help with from some eager researchers, once the camera systems are installed. I can easily collect datasets at least.

And this is all for the navigation cameras. For the crop-facing cameras, I think we must have general-purpose compute, and it will need to be very powerful. Just running semantic segmentation on 1000x1000 pixel images takes something like a Jetson Xavier at least.
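To put rough numbers on that claim, here is a quick timing sketch with an off-the-shelf torchvision segmentation model at 1000x1000; the model choice is illustrative, not a recommendation:

```python
import time
import torch
import torchvision

# Rough per-frame timing for semantic segmentation at ~1000x1000 pixels.
model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.eval().to("cuda" if torch.cuda.is_available() else "cpu")

device = next(model.parameters()).device
x = torch.randn(1, 3, 1000, 1000, device=device)

with torch.no_grad():
    model(x)  # warm-up pass
    if device.type == "cuda":
        torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(10):
        out = model(x)["out"]  # (1, 21, 1000, 1000) class logits
    if device.type == "cuda":
        torch.cuda.synchronize()
    print(f"{(time.time() - t0) / 10 * 1000:.1f} ms per frame")
```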


Do you have any data you can collect at this time, or have already collected? Some of the "Papers with Code" entries reference supervised learning to make this go more quickly, which in this case could imply a carefully calibrated crop mockup set that the robot would traverse to 'learn' depth perception.

Also found this paper, which uses an RPi:


I am not working on any navigation camera solution right now, because GPS is pretty great so far. I am more focused on the crop-facing camera system. I guess what I meant was that if we want to develop a nav-cam solution, then collecting data isn't too hard even if I wasn't equipped to solve the machine learning part of it. (Maybe I would be, but mostly I take existing research and deploy it).

That said, my Rover robot needs to solve the same outdoor vision problem and it already has a four-camera system on it. Rover is undergoing some mechanical maintenance right now, but I am eager to start collecting new datasets for it once I get it going again. If someone was interested in the vision-based navigation problem, that's basically what I designed Rover for, and I will be making its datasets public.

As for the crop camera, I have some preliminary images, but the idea of using fisheye lenses did not work out well due to insufficient depth of field: I can see the entire underside of Acorn with the two 13 MP cameras underneath it, but not all of the field of view is in focus. I will share some images anyway, but I think it needs more work before we can start producing useful datasets.

As a stopgap we could also use some GoPro cameras to start collecting initial data while I mess with optics for the intended vision system. I'll have to throw our GoPro under there and see how workable that data is. I don't think it would be useful for realtime processing (as far as I know), but it would be fine for post-processing and testing a training system.
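For the post-processing route, turning GoPro footage into a frame dataset is straightforward with OpenCV; the filenames and the one-frame-per-second sampling rate below are arbitrary choices:

```python
import cv2
from pathlib import Path

# Pull frames from GoPro footage at a fixed rate for offline training data.
# The video path and output directory are placeholders.
video = cv2.VideoCapture("gopro_underside.mp4")
out_dir = Path("dataset/frames")
out_dir.mkdir(parents=True, exist_ok=True)

fps = video.get(cv2.CAP_PROP_FPS) or 30
step = int(fps)  # keep roughly one frame per second
i = saved = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    if i % step == 0:
        cv2.imwrite(str(out_dir / f"frame_{saved:06d}.jpg"), frame)
        saved += 1
    i += 1
video.release()
```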

Hi @taylor, with GPS working well, what's the 'real need' for cameras?

Maybe worth discussing and capturing User Stories in a Requirements Backlog? I'd be happy to help.

Camera Needs:

  1. Navigation: adequate with GPS.
  2. Perimeter Safety: safely and immediately stop/alert/resume for people, animals, and large objects (a rough detection sketch follows below).
  3. Perimeter Obstacles: navigation avoidance of protruding or fallen branches, logs, wild/farm animals, people, and limbs.
  4. Surveillance & Security: user-controlled PTZ (pan/tilt/zoom) cameras for surveillance and security of the platform, its paths, implements, surroundings, plant rows, fencelines, gates, intruders, pond/swale water levels, etc. Inspection of the pathway ahead/behind, trees/plants/fields, time-lapse. Option for easy addition of PoE/WiFi PTZ cameras like the Reolink solar-battery Argus PT (Smart 2K HD Pan & Tilt Battery Security Camera) or the PoE RLC-823A (4K PoE PTZ Security Camera with Auto-Tracking).
  5. Crop Inspection Cameras: crop-facing cameras for identification; plant, tree, and orchard health; growth rate; maturity; ripeness; insects/birds/wildlife; animal damage; disease.
  6. Soil Microscope Camera: automated soil sampling and microscope photos of the soil life present.

Some use cases may be paired with PIR, radar, laser, or lidar sensors to determine the nature of objects and the action to take.
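For the perimeter-safety item, here is a rough sketch of what a camera-based stop condition might look like, using an off-the-shelf torchvision detector. The model choice, class list, threshold, and the idea of a stop hook are all illustrative assumptions, not a proposal for Acorn's actual safety system (which should never rely on a single camera model):

```python
import torch
import torchvision

# Off-the-shelf COCO detector, used here purely as an illustration.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

PERSON, DOG, HORSE, COW = 1, 18, 19, 21  # COCO category ids
STOP_CLASSES = {PERSON, DOG, HORSE, COW}

def should_stop(frame_tensor, threshold=0.7):
    """frame_tensor: (3, H, W) float image in [0, 1].
    Returns True if a person or large animal is detected with high confidence."""
    with torch.no_grad():
        det = model([frame_tensor])[0]
    for label, score in zip(det["labels"], det["scores"]):
        if label.item() in STOP_CLASSES and score.item() > threshold:
            return True
    return False
```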

I'd be particularly interested myself in solving the last one, the Soil Microscope Camera. This has tremendous value to the Regenerative and Permaculture community.

Hi @WillStewart, @taylor,

With the aim of processing and storing locally offline, and connecting as and when in range… could non-critical compute, to take advantage of a phone's ample:

  • compute processing,
  • storage,
  • phone camera function,
  • platform control and information panel touchscreen,
  • optional 4G/5G internet or Bluetooth peripheral connectivity

…be offloaded over the WiFi/hotspot to an upgradeable Android phone app housed locally onboard? That would offer modular upgrade of the phone device by the user, so that only critical core platform functions are retained on the robot itself.

Non-critical or heavy functions could be handed off to the phone for processing. Perhaps a landscape-oriented, glass-panelled box, large enough for either tablets or phones with a USB-C connector for charging, could house the phone as secondary compute. It could be mounted either at the front facing forward, on the left side facing forward (the standard tractor mounting side), or towards the rear, near a stop button for safety.
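To make the idea concrete, the robot side of such an offload could be as simple as posting JPEG frames to an app on the phone over the hotspot. The URL, endpoint, and response format here are entirely hypothetical:

```python
import cv2
import requests

# Hypothetical offload client: the robot posts frames over the WiFi hotspot
# to an HTTP endpoint served by an app on the onboard phone.
PHONE_URL = "http://192.168.43.1:8080/infer"  # assumed hotspot address

def offload_frame(frame):
    """Send one camera frame to the phone; return its result or None."""
    ok, jpg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
    if not ok:
        return None
    try:
        resp = requests.post(PHONE_URL, data=jpg.tobytes(),
                             headers={"Content-Type": "image/jpeg"},
                             timeout=0.5)
        return resp.json()  # e.g. detections computed on the phone
    except requests.RequestException:
        return None  # phone unreachable: fall back to onboard compute
```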

This would enable the platform to do what it does best: intelligently and safely move about with payloads, performing targeted actions that take the work out of crisis farming, saving time and building good soil and soil life pre-emptively!

I believe weeds, pests, and diseases affecting yields are not going to be a problem if we focus on the soil life, once permaculture mainframe design is implemented. Cameras and farmers will spend less time on sentry duty and more on time-lapse trend forecasting of soil improvement conditions and weather: out of the plant business and more into naturally rearing soil livestock synergies.


Interesting ideas. There are different use cases with any combination of the following:

a) One or more robots: a single robot can't share compute.
b) Near or far from farm buildings/networking: compute could be offloaded somewhere nearby.
c) One or more fields being managed: more fields increase the likelihood of low bandwidth.
d) Power budget: the robot is tilling or mowing versus slowly moving forward inspecting, or any combination thereof.
e) Modern infrastructure or less infrastructure: California with 5G bandwidth, or the middle of the Yukon with little or expensive bandwidth.
f) Till versus no-till: tilling means high power demands, although no-till doesn't always mean low power demands. Sometimes no-till means moving tons of wood chips to cover or mulch the area.

The answer on computational power varies depending on where the robot sits among those use cases. I'm not sure we can really answer these questions without understanding the many variables those use cases create.

Conventional, non-robotic agriculture equipment is often limited by power, and I suspect these robots always will be too. The obvious answer is to just add high-power diesel engines, but that makes the robot fit a completely different niche. There are going to be lots of tradeoffs made in Acorn. The question is what the product roadmap of Acorn should be; that way we can start to answer these questions.

Even comparing Acorn with a Tesla is a bad comparison, due to the much lower computation requirements of the Tesla versus Acorn. Acorn needs to weed at a high rate while never weeding the wrong plant. Generally, all a Tesla needs to do is keep its speed in a narrow range and stay between the road lines. Plus a Tesla can always panic and give control to the human. Hmm, I guess Acorn could too, but that could mean Acorn may 'stop' randomly.

So many possibilities. So many.