A Layered, Community-First Approach to Data Using Large Language Models (LLMs)

Using AI to support access to community-enabling knowledge

This app tests AI's usefulness (as of January 2026).

Nothing here is meant to be a black box, a shortcut, or an authority replacing people. The goal is the opposite: to make knowledge more visible, more inspectable, and more shareable—and to see whether a community resource like this can actually be built together.

What you're seeing in the interface is the result of a deliberately constrained, multi-step process designed to show where information comes from, how it expands, and where uncertainty still exists.

At the center of this experiment is a simple question:

Can we use new tools to support real community knowledge—without hiding the seams?

What You're Seeing in the App

For many fields in this app, you'll see information presented in layers or tiers (t1, t2, t3). That's intentional.

Instead of asking an AI for "the answer" and then wondering where it came from and how much was hallucinated, the system builds responses step by step, starting with what is known and documented, and only later expanding into broader synthesis. Each layer has a different role, and each is shown openly.

This lets you see:

  • What is firmly sourced
  • What comes from additional references
  • What is inferred or synthesized
  • Where gaps still exist

You don't have to trust the AI. You can inspect it.
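As a rough illustration, a single field's layered response might look like the sketch below. Only the tier labels (t1, t2, t3) come from the app; the field name, record shape, and source identifiers are hypothetical.

```python
# Hypothetical shape of one field's layered response.
# Only the tier labels (t1, t2, t3) come from the app; the rest is illustrative.
field = {
    "name": "germination_notes",
    "t1": {  # firmly sourced: only explicitly provided material
        "text": "Cold-moist stratify 30 days.",
        "sources": ["local-seed-guide-2024"],
    },
    "t2": {  # expanded, but still tied to identified references
        "text": "Some growers report success without stratification.",
        "sources": ["regional-nursery-handbook"],
    },
    "t3": {  # model knowledge alone; empty means "uncertain", not "failed"
        "text": "",
        "sources": [],
    },
}

# Every claim can be traced: a reader can check which tier said what,
# and which sources (if any) back it.
for tier in ("t1", "t2", "t3"):
    entry = field[tier]
    print(tier, "->", entry["text"] or "(no data)", entry["sources"])
```

The point of a shape like this is that the seams stay visible: a reviewer can inspect each tier's text and sources independently.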

Three Tiers of Knowledge Exploration

Tier 1: Local & Trusted Sources

This layer uses only explicitly provided, trusted source material. It's probably missing some great sources.

  • If the Tier 1 sources didn't include it, it doesn't appear here.
  • If information is missing, that absence is stated clearly.
  • Uses an LLM to format and style the response.

This tier may feel incomplete—and that's the point. It shows the current state of documented knowledge from a small set of sources, not a polished, filled-in narrative.

Tier 2: Expanded Inputs, Still Source-Bound

This layer adds information from additional, identified sources.

  • Can add missing details
  • Can surface disagreements between sources
  • Carefully broadens the available context
  • Uses an LLM to format and style the response.

It still does not rely on the AI's general knowledge. Everything added here is tied back to references.

Tier 3: Model Knowledge (Independent Diagnostic)

This tier operates independently—it has no access to what Tier 1 or Tier 2 produced. Its purpose is to reveal what the AI model knows on its own, based purely on its training.

Here, the AI:

  • Reports patterns it learned during training (species-level, genus-level, family-level, or broader ecological patterns)
  • Indicates where its knowledge is strong or weak
  • Shows where patterns break down or vary across taxa
  • Returns empty values when knowledge is fragmented or uncertain—this is a valid and informative outcome

This layer is a diagnostic instrument. It helps us understand what the model knows, how confidently it knows it, and where its knowledge generalizes or doesn't. Accuracy and transparency take precedence over completeness.
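One way to picture this diagnostic role is the sketch below, where a Tier 3 report is accepted even when mostly empty, because an empty value is itself a measurement of what the model knows. The field names, "basis" labels, and report shape here are assumptions for illustration, not the app's actual schema.

```python
# Hypothetical Tier 3 output: the model reports what it knows from training
# alone, with explicit gaps. Field names and labels are illustrative.
tier3 = {
    "bloom_period": {"value": "late spring", "basis": "genus-level pattern"},
    "seed_longevity": {"value": "", "basis": ""},  # fragmented knowledge: left empty
}

def summarize(report):
    """Count answered vs. empty fields; empty is a valid, informative outcome."""
    answered = sum(1 for entry in report.values() if entry["value"])
    return answered, len(report) - answered

answered, empty = summarize(tier3)
print(f"{answered} answered, {empty} left empty (uncertain)")
```

Because this tier never sees Tier 1 or Tier 2 output, its gaps say something about the model itself rather than about the supplied sources.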

Why Do This Instead of "Just Using AI"?

Because ecological restoration, seed work, and community knowledge depend on trust.

This structure:

  • Makes uncertainty visible
  • Prevents quiet drift into confident-sounding errors
  • Lets novices and experts use the same system safely
  • Keeps humans in the loop

AI here is treated as a tool for structured reasoning, not a voice of authority.

The Rules You Don't See (But Benefit From)

Behind each field in the app are fixed content rules that the AI must follow:

  • What the field is for
  • What it is not allowed to contain
  • How long it can be
  • Whether it must be a list, a paragraph, or an enum
  • Whether "no data" is an acceptable outcome

The AI doesn't decide these things. People did.

That makes the data predictable, comparable, and reviewable—by humans and by software.
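A minimal sketch of what such a per-field rule might look like in code. All names and values here are assumptions for illustration; the document does not specify the app's real rule format, only the constraints each rule covers.

```python
from dataclasses import dataclass

# Hypothetical per-field content rule, mirroring the bullet points above.
@dataclass
class FieldRule:
    name: str              # what the field is for
    forbidden: list        # what it is not allowed to contain
    max_length: int        # how long it can be
    shape: str             # "list", "paragraph", or "enum"
    allow_no_data: bool    # whether "no data" is an acceptable outcome

def validate(rule: FieldRule, value: str) -> list:
    """Return human-readable violations; an empty list means the value passes."""
    problems = []
    if not value:
        if not rule.allow_no_data:
            problems.append("empty value not allowed for this field")
        return problems
    if len(value) > rule.max_length:
        problems.append(f"too long ({len(value)} > {rule.max_length} chars)")
    for bad in rule.forbidden:
        if bad.lower() in value.lower():
            problems.append(f"contains forbidden content: {bad!r}")
    return problems

rule = FieldRule("habitat_notes", forbidden=["medicinal advice"],
                 max_length=300, shape="paragraph", allow_no_data=True)
print(validate(rule, ""))  # empty list: "no data" is acceptable here
```

Because the rules live outside the AI, the same validation can be run by humans and by software, on every contribution, before anything is published.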

Why This Exists at All

This project has another purpose beyond producing plant data.

It is an experiment.

An experiment in:

  • What modern tools can actually do (and where they fail)
  • How subtle constraints change outcomes
  • How transparent AI systems feel to real users
  • Whether shared data can grow through community contribution instead of centralized control

And most importantly:

It raises a big human question: can we build something like this together?

A Call to Action

This app is not finished. It is an offering and a vision of what could be, and it stumbled into its current state.

If this works at all, it works because of people.

  • Got data?
  • Got field notes?
  • Got seedlings to photograph?
  • Got corrections?
  • Got a need for better shared resources?
  • Got ideas for how this could serve your community better?

Restoration is collective work. Knowledge has to be too.

The question isn't whether AI can do this.

The real questions are: What should we build? What needs to exist for the broad community to thrive? Can we build it?