Using AI to support access to community-enabling knowledge
This app tests AI's usefulness (as of January 2026).
Nothing here is meant to be a black box, a shortcut, or an authority replacing people. The goal is the opposite: to make knowledge more visible, more inspectable, and more shareable—and to see whether a community resource like this can actually be built together.
What you're seeing in the interface is the result of a deliberately constrained, multi-step process designed to show where information comes from, how it expands, and where uncertainty still exists.
At the center of this experiment is a simple question:
Can we use new tools to support real community knowledge—without hiding the seams?
For many fields in this app, you'll see information presented in layers or tiers (t1, t2, t3). That's intentional.
Instead of asking an AI for "the answer" and then wondering where it came from and how much was hallucinated, the system builds responses step by step, starting with what is known and documented, and only later expanding into broader synthesis. Each layer has a different role, and each is shown openly.
This lets you see where each answer comes from, how it expands, and where uncertainty remains.
You don't have to trust the AI. You can inspect it.
Tier 1 (t1): This layer uses only explicitly provided, trusted source material. It is probably missing some great sources.
This tier may feel incomplete, and that's the point. It shows the current state of documented knowledge from a small set of sources, not a polished, filled-in narrative.
Tier 2 (t2): This layer adds information from additional, identified sources.
It still does not rely on the AI's general knowledge. Everything added here is tied back to references.
Tier 3 (t3): This tier operates independently. It has no access to what Tier 1 or Tier 2 produced; its purpose is to reveal what the AI model knows on its own, based purely on its training.
Here, the AI answers from its training alone, with no provided source material. This layer is a diagnostic instrument: it helps us understand what the model knows, how confidently it knows it, and where its knowledge generalizes or doesn't. Accuracy and transparency take precedence over completeness.
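As a concrete sketch, the tier assembly described above could look something like the following. This is an illustration under assumptions, not the app's actual code: the names (`TierResult`, `build_response`), the data shapes, and the idea that retrieval has already happened are all invented here.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TierResult:
    tier: str                 # "t1", "t2", or "t3"
    text: str                 # the answer produced at this tier
    sources: List[str] = field(default_factory=list)  # empty for t3

def build_response(t1_excerpts: Dict[str, str],
                   t2_excerpts: Dict[str, str],
                   model_answer: str) -> List[TierResult]:
    """Assemble a layered answer from already-retrieved material.

    t1_excerpts: source id -> excerpt from the trusted source set.
    t2_excerpts: source id -> excerpt from additional identified sources.
    model_answer: the model's own answer, produced in isolation.
    """
    merged = {**t1_excerpts, **t2_excerpts}
    return [
        # Tier 1: only explicitly provided, trusted material.
        TierResult("t1", " ".join(t1_excerpts.values()), sorted(t1_excerpts)),
        # Tier 2: trusted material plus additional identified sources,
        # each still tied back to a reference.
        TierResult("t2", " ".join(merged.values()), sorted(merged)),
        # Tier 3: the model alone; it never sees the material above.
        TierResult("t3", model_answer, []),
    ]
```

Because every tier carries its own source list, a reader (or a script) can check exactly what each layer drew on, which is the "inspect it" property described above.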
Why structure it this way? Because ecological restoration, seed work, and community knowledge depend on trust.
This structure keeps the process open to inspection: AI here is treated as a tool for structured reasoning, not a voice of authority.
Behind each field in the app are fixed content rules that the AI must follow.
The AI doesn't decide these things. People did.
That makes the data predictable, comparable, and reviewable—by humans and by software.
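One way such fixed rules become reviewable by software is to express them as a machine-checkable table. The sketch below is purely illustrative: the field names, types, and limits are invented for this example, not taken from the app.

```python
# Hypothetical, people-defined rules: one entry per field in the app.
FIELD_RULES = {
    "bloom_period":  {"type": str, "allowed": {"spring", "summer", "fall", "winter"}},
    "max_height_cm": {"type": int, "min": 1, "max": 10000},
}

def validate_field(name: str, value) -> bool:
    """Return True if `value` satisfies the fixed rule for field `name`."""
    rule = FIELD_RULES[name]
    if not isinstance(value, rule["type"]):
        return False
    # Enumerated fields must come from the allowed set.
    if "allowed" in rule and value not in rule["allowed"]:
        return False
    # Numeric fields must fall inside the stated range.
    if "min" in rule and not (rule["min"] <= value <= rule["max"]):
        return False
    return True
```

Because the rules live in data rather than in the AI's judgment, any generated value can be checked automatically, and the rules themselves stay readable by people.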
This project has another purpose beyond producing plant data.
It is an experiment in whether new tools can support real community knowledge without hiding the seams. And most importantly, it asks a big human question: can we build something like this together?
This app is not finished. It is an offering and a vision of what could be; it stumbled into its current state.
If this works at all, it works because of people.
Restoration is collective work. Knowledge has to be too.
The question isn't whether AI can do this.
The real questions are: What should we build? What needs to exist for the broader community to thrive? Can we build it?