The fading light of technology?

If we want to look at the real issue with the planet, it is ultimately humans. We have effectively terraformed the planet over the last couple of hundred years. Part of the problem may be that humans overcomplicate many things. Solutions are often simple. Abating CO2 emissions could be partially achieved by planting more trees. Plastic could be better recycled, people could use less plastic, or we could make more biodegradable “plastic”. People could eat more sustainably. We could live without most of the garbage we manufacture. Years ago companies built products to last; now we have become a throwaway society. A case in point is cameras. Analog cameras lasted decades – even now, cameras built in the 1930s still work. The downside of analog photography is of course the chemicals used to develop photographs. How many digital cameras do we all have? I probably have 10-12 sitting around, the majority obsolete technology. Batteries run their course and wear out, or the number of shutter actuations is maxed out. Or the technology just becomes dated, due to a lack of megapixels.

We may now be at a nexus – how much more can technology improve? It already pervades too much of our lives, and I would argue that improvements in many technologies have reached a plateau. There is a reason vinyl records have made a comeback, and even analog cameras (which is why their resale prices keep rising!). I won’t go into the vagaries of AI, but that too promises much and will likely deliver little. Analog cameras have reappeared because they provide photographs with a character that digital lacks (partially this is because the younger generation has grown up using vintage filters in apps like Instagram). The next evolution might be the creation of a digital film medium for analog cameras – a reusable “film” so to speak, combining the aesthetics of film with the ease of digital processing (like this product).

Everything that is old is new again. Or maybe everything that is new is not necessarily better. This was the case with e-readers, which were supposed to replace paper books… but that didn’t happen. The truth is we don’t need so much technology. Even the iPhone 8 I got last week is almost the same as the iPhone 6 it replaced. iPhone X… what more would it do for me? Except for the convenience of having a camera and notepad wherever I go, and being able to Google things, it doesn’t do much for me. We need to cut back on technology and take a closer look at the way we used to live… maybe get a better understanding of nature, because at the end of the day, you can’t eat technology.

Computer vision, AI and the art of fruit picking

Lots of companies are jumping on the AI bandwagon. The latest thing I have heard about is using computer vision and AI to create robot fruit and vegetable pickers. I’ve discussed this before when talking about auto-mushroom pickers and image segmentation. This application is of course nothing new; the Japanese have been working on these systems for decades. Not for fun either – they have a genuine need, with a decreasing population, few of whom want to pick fruit or vegetables. The thing is, there is no one-algorithm-fits-all solution. Designing an algorithm to pick oranges from trees is not terribly difficult. That’s largely because oranges are, well, orange, and the surrounding foliage is green. Oranges are also fairly spherical. It’s even possible to build some sort of robot to pick the fruit (this can be super challenging for some fruit because of how soft it is). But that’s a best-case scenario.

Apples are challenging because they don’t come in a uniform colour. Then there is also the problem of foliage. It is not an easy problem, because all trees are different, and variegated apples all differ in colour as well. Designing a fruit-picking robot capable of locating and characterizing fruit on a tree is challenging, largely because it must also detect obstacles for a collision-free process (drone picking, anyone?). Many researchers have investigated this topic, and it remains an open challenge, largely because humans can still pick fruit faster than any machine, unless it is in the closed environment of a greenhouse, where environmental factors can be controlled. Some of the problems include occlusion of the target fruit by foliage, branches, and other fruit in an extremely unstructured environment. In some cases not all fruit will be “ripe” at the same time, so the system has to determine whether a fruit is optimal for picking. In addition, the robot picker may also have to determine the quality of a piece of produce, as damaged fruit is not optimal.

The scope for these algorithms is widespread, and encompasses all manner of fruit and vegetables, e.g. sweet peppers, oranges, apples, and even strawberries. The computer vision algorithms are often quite simple, because they have to work in real time, and obviously there is a place for machine learning, because these are repetitive tasks. Many detection and localization algorithms work by discriminating between foliage, which is usually green, and fruit, which in the case of strawberries is distinctly red. It’s not hard to use colour space to localize where the fruit is (and determine ripeness), and, combined with some shape analysis, decide how to go about robotically picking it (see the sketch below). The problem lies in the fact that experiments on these systems often happen in optimal conditions, with controlled lighting in a laboratory environment. Great for strawberries grown in a greenhouse, not so great for field strawberries (which are likely more economical to grow). Strawberries are just one case too… how does a robot deal with the fragility of raspberries growing in a dense shrub?
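To give a flavour of the colour-plus-shape idea, here is a minimal Python/OpenCV sketch – not any particular published system. It masks red pixels in HSV space, then uses contour area and circularity as a crude shape check. The HSV bounds, size cut-off, and the file name “strawberries.jpg” are illustrative assumptions, not tuned values.

```python
import cv2
import numpy as np

# Minimal sketch: colour-space localization plus simple shape analysis.
# All thresholds below are illustrative guesses, not tuned values.
img = cv2.imread("strawberries.jpg")          # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red wraps around hue 0 in OpenCV's HSV, so combine two ranges.
lower = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
upper = cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
mask = cv2.bitwise_or(lower, upper)

# Remove speckle noise before looking for fruit-sized blobs.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    if area < 500:                            # ignore tiny fragments
        continue
    perimeter = cv2.arcLength(c, True)
    circularity = 4 * np.pi * area / (perimeter ** 2)   # 1.0 = perfect circle
    x, y, w, h = cv2.boundingRect(c)
    # Fraction of the bounding box that is "red": a crude ripeness proxy.
    redness = cv2.countNonZero(mask[y:y + h, x:x + w]) / float(w * h)
    print(f"candidate at ({x},{y}): area={area:.0f}, "
          f"circularity={circularity:.2f}, redness={redness:.2f}")
```

In a greenhouse with controlled lighting, something this simple gets you surprisingly far; in the field, shadows, occlusion, and variable sunlight are exactly where it falls apart.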

Here is a very simple example of a series of strawberries on a plant.

There are a number of ripe strawberries, but some of them are naturally occluded by the plant. There are also a bunch of underripe strawberries. If we then perform a simple segmentation using Lab colour space, we can extract the red regions of the image (threshold criteria: L: 73-196, a: 160-194, b: 133-177). This extracts most of the red regions in the image, representing most of the strawberries, and is really the easy part. There are of course a couple of berries covered by leaves, or affected by shadows from the foliage (making the red darker and pushing it beyond the constraints of the simple algorithm). The harder part is determining which ones should be picked, i.e. which ones are ripe. Are the partial strawberries just occluded, or underripe?
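For the curious, here is roughly what that segmentation step looks like in Python with OpenCV, applying the threshold criteria above to OpenCV’s 8-bit Lab image. The morphological clean-up, minimum blob size, and file name are my own assumptions, and this only covers the “easy part”; it says nothing about whether a small blob is an occluded ripe berry or an underripe one.

```python
import cv2
import numpy as np

# Sketch of the simple Lab-space segmentation described above, using the
# stated thresholds (L: 73-196, a: 160-194, b: 133-177) on 8-bit Lab values.
img = cv2.imread("strawberries.jpg")          # hypothetical input image
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)

# Keep only pixels whose L, a, b values fall inside the stated ranges.
mask = cv2.inRange(lab, (73, 160, 133), (196, 194, 177))

# Remove small speckles so each remaining blob is a candidate berry region.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Label connected regions; stats give the area and bounding box per blob.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
for i in range(1, n):                         # label 0 is the background
    area = stats[i, cv2.CC_STAT_AREA]
    if area < 300:                            # assumed minimum berry size
        continue
    cx, cy = centroids[i]
    print(f"red region {i}: area={area}, centre=({cx:.0f},{cy:.0f})")

# A small or ragged blob could be an occluded ripe berry or an underripe one;
# deciding which is the harder classification step noted in the text.
```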

The trick of course for the robotic picker is to pick the ripe strawberries within a certain time. Too slow, and one would wonder why such technology was created; too fast, and there is a risk of damaging the fruit. Although, considering that a lot of strawberries bought at the grocery store aren’t exactly ripe, maybe these constraints don’t matter much. Food for thought. Maybe human pickers do a better job? Maybe technology will one day out-pick them… maybe.