About & Methodology
The AI Transparency Ledger is a free, community-driven database that tracks how companies use AI. Every entry is backed by cited sources and community verification.
What We Are
The AI Transparency Ledger is a Wikipedia-style reference database that documents how companies deploy artificial intelligence across industries. We track specific practices — not vague headlines — backed by primary sources you can read yourself.
We built this because finding reliable, centralized information about corporate AI use is genuinely hard. News articles are scattered. Company announcements are promotional. Academic research lags by years. We wanted a single place where you could look up a company in under two minutes and get a clear, sourced answer.
In our survey, 92% of respondents said they wished a single source existed for this information, and 77% said they would check it before making decisions about products and employers. So we built the thing we all wanted.
The ledger is free and open, and it always will be. There is no paywall, no premium tier, and no advertising. Our only goal is to make corporate AI practices more visible.
Editorial Process
Entries go through a four-stage lifecycle: submission, review, publication, and ongoing verification.
1. Submission: Anyone can submit a report via the Report a Sighting form; no account required. Reports include a description of what was observed and an optional evidence link or screenshot.
2. Review: A moderator reviews each submission. They verify the claim against the provided source, assess whether the source meets our evidence standards, and check for duplicates.
3. Publication: Accepted submissions become structured entries linked to the company profile. Each practice gets a category, a severity tag, and at least one verified source.
4. Ongoing Verification: Entries are periodically re-checked. If a company reverses a practice or new contradicting information emerges, the entry is updated with its full edit history preserved.
Every change to every entry is recorded in the public edit history visible on each company profile page. This append-only log is the primary mechanism against astroturfing — companies or their representatives cannot quietly alter entries without leaving a trace.
Evidence Standards
Every claim in this database must be backed by at least one cited source. We categorize sources into six types, ranked roughly by reliability:
Company Disclosure
Official statements, blog posts, press releases, or SEC filings from the company itself. Highest reliability for confirming the practice exists.
Regulatory Filing
Documents filed with government agencies (FTC, EU regulators, NLRB, etc.). High reliability; carries legal weight.
Academic Paper
Peer-reviewed research documenting AI practices. High reliability but may lag real-world deployment.
News Article
Reporting from established journalists. Accepted when the publication is reputable and the story is grounded in primary sources.
Social Media Post
Posts from company accounts or verified employees. Accepted with lower confidence weighting. Screenshots required.
Community Report
User-submitted observations. Always marked as unverified until corroborated by another source type. Treated as leads, not evidence.
We reject claims that are based solely on speculation, rumors, or sources that cannot be independently verified. When evidence is disputed, the disputed status is shown on the entry — we do not silently remove contested information.
Neutrality Commitment
We do not tell you how to feel about AI. We document what companies do and let you draw your own conclusions.
The UI does not use red/green good/bad framing. Status badges are factual, not emotional. “Confirmed AI Use” is descriptive, not a judgment. We track AI use — we do not rate it.
We do not accept payments from companies to influence their entries. We do not allow companies to submit corrections directly — they must go through the same community process as everyone else. All edits are logged and public.
We acknowledge that some AI practices are genuinely contested. A tag like “Replaces Human Labor” is a factual description, not an accusation: severity tags describe real-world effects, not moral judgments.
If you believe an entry is inaccurate, use the Report a Sighting form to submit contradicting evidence. We will review it and update the entry with full edit trail visibility.
How to Contribute
The most valuable thing you can do is submit evidence. If you notice a company using AI — in a product, in an ad, in a job posting, in a legal filing — report it. It takes under 90 seconds.
Submit a Report
Use the Report a Sighting form. Describe what you noticed. Attach a URL or screenshot if you have one. No account required.
Share News Articles
When you read a news article about a company's AI practices, submit it as evidence. The article URL is enough — we'll review and link it.
Spread the Word
Link to company profiles when discussing AI practices in forums, social media, or conversations. Organic traffic and trust are the foundation of this project.
Team
The AI Transparency Ledger is a collaboration between two students: Trent Maziarz @ UMass Amherst and Benjamin Brown @ CSU. We started this project after a survey of 64+ people confirmed what we already suspected: reliable, centralized information about corporate AI use simply doesn't exist yet.
This is a pre-seed, validation-phase project. We are not a company: we have no investors, no revenue, and a team of two. The codebase isn't open-source. If you want to contribute technically or editorially, reach out.
AI Transparency Ledger · UMass Amherst × CSU
Free community project