
I’ve been working on a side project exploring whether modern image classification models can reliably identify plant species from photos alone, using large public biodiversity datasets (mainly iNaturalist / GBIF).
I’ve put together a very early demo:
https://huggingface.co/spaces/juppy44/plant-classification
At this stage it’s purely a technical experiment: single images only, no extra context, and it runs on limited compute, so accuracy varies a lot depending on species and image quality.
What I’m mainly interested in hearing from people with ecology or plant science backgrounds is:
- where these kinds of tools usually fail in practice
- whether there are particular plant groups that are inherently hard to distinguish from images
- what common misidentifications you see in existing apps
If I get funding, the next stage is to support multiple photos as input, as well as data such as lat/lon and date, which should greatly improve accuracy.
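The lat/lon and date idea can be sketched as a simple late fusion: multiply the image model’s per-species probabilities by a location/season prior and renormalize. This is a minimal illustration only; the species names, probabilities, and the `fuse_with_prior` helper are all invented for the example, not taken from the demo.

```python
# Hypothetical sketch of combining image-model probabilities with a
# location/date prior: posterior ∝ p(species | image) * p(species | lat/lon, date).
# All names and numbers below are made up for illustration.

def fuse_with_prior(image_probs, prior_probs):
    """Multiply image probabilities by a geo/season prior and renormalize."""
    fused = {sp: image_probs[sp] * prior_probs.get(sp, 0.0)
             for sp in image_probs}
    total = sum(fused.values())
    if total == 0.0:  # prior rules out everything: fall back to the image alone
        return dict(image_probs)
    return {sp: p / total for sp, p in fused.items()}

# The image model is unsure between two look-alikes...
image_probs = {"Alnus glutinosa": 0.45, "Alnus incana": 0.40, "Betula pendula": 0.15}
# ...but only one of them is likely at this location and time of year.
prior_probs = {"Alnus glutinosa": 0.60, "Alnus incana": 0.05, "Betula pendula": 0.35}

posterior = fuse_with_prior(image_probs, prior_probs)
best = max(posterior, key=posterior.get)
```

With these made-up numbers the prior breaks the near-tie and the fused distribution strongly favors one species, which is the kind of accuracy gain the extra context is meant to provide.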
by Lonely-Marzipan-9473

4 Comments
1. One photo is often not enough to properly identify a plant: it might not show the relevant details, more details than can be seen in one photo may be necessary, or an ID feature might require a microscope or other specialized testing to determine (sometimes it’s as simple as flavor or scent, but it can be more complicated).
2. Graminoids, pretty much all basal embryophytes, and sometimes ferns. Often this is due to the issues in #1, but also because their morphology is so different from that of flowering plants that they’re hard to make heads or tails of unless you’ve taken the time to learn how to understand what you’re looking at. So a layperson can’t look at a list of ID suggestions and know which one is correct.
As a gardener, I’ll count myself into the group of people who can give some information 🙂
If I understand correctly, this is about the plant identification systems used by apps like PlantNet (one that is used a lot by fellow gardeners and non-professionals I know), iNaturalist, or for that matter even Google image search. If it is not about that, feel free to ignore this post!
The biggest issue I have with those systems is that they basically just compare pictures and go with whatever looks most similar. I’ve noticed that these apps are usually fairly good at identifying whether a leaf margin is crenate or dentate, but they usually don’t pay attention to lenticels or hairs etc., even though these things can be key for a good identification. They also don’t take into account how big things are, which can make it especially hard to tell species apart. Likewise, things like dimorphism aren’t always taken into account, or whether something is a young or adult plant.
I always wish I could basically type in certain identification keys when the picture isn’t enough for an accurate identification. Basically saying: this is a young tree of 1.5 m with black lenticels, so the app can tell me whether young Alnus glutinosa has differently coloured lenticels, or whether the app really doesn’t know what plant this is. Also root systems: those apps usually can’t tell you the type of root system, even though that can help with identification when you are pulling out weeds.
For me, in my job, the main use is giving hints for follow-up research on what plant it is, or helping when two people argue about which plant it is (because yes, Fragaria and Waldsteinia can be distinguished by looking at the leaves, but most people don’t memorize the differences, which makes having pictures to compare very useful).
Hope that helps a bit.
Herbarium Director in the Neotropics here. I find these kinds of tools work quite well for well-known species with wide distributions, ornamentals, and species with commercial applications. But tens of species from megadiverse countries don’t have a decent picture or a scientific drawing; they’re under-collected, with just a couple of historical samples in foreign herbaria, or barely known only thanks to botanical descriptions in Latin or other old languages in books from the 18th-19th centuries, with no DNA sequences at all. Almost nobody is going back to the field looking for them. And we’re losing our last natural medicines and foods, etc.
Of course we must keep developing these tech tools, but botanical skills should be taught and promoted more than ever, because we’re losing our species unaware of this tragedy, without having a single picture of them. I think these tools are a great chance to become conscious of all these green living wonders around us!
Maybe this is useful:
Source: I have done textbook-based ID of tracheophytes since 2000 and have used photo-ID apps since 2018.
There are some freaky genera that cannot be resolved by these tools, or at least not reliably or fully, e.g. *Hieracium, Rubus, Oenothera, Dryopteris, Carex, Valeriana, Valerianella*. In these cases, identifying the species often requires very specific or tiny features which are probably not visible in photos taken by laypeople (such as details of rotting leaves from the previous year, details of trichomes, venation, or seeds/fruits, etc.). I am sure there are also taxonomic issues within these genera.
The FloraIncognita app, which I use most of the time, merges problematic species into aggregates. You only get an aggregate as the result and can look up which species are included. I think ObsIdentify also allows you to pick a particular species in those difficult cases, which might cause mistakes, as I suspect that not every user knows what an aggregate is. A fun thing with ObsIdentify is that it nearly always comes up with some low-score suggestion, which may even be a beetle or a bird when you are trying to ID a moss.
I have always wondered whether photo-ID tools try to extract classical morphological features from the photos, or whether it’s just training and test sets until it works, and no one knows why.