How CorpAI reviews MCP servers before they reach your catalog
CorpAI now analyzes the code, dependencies, and scan artifacts behind vetted public MCP servers before making them simple for enterprise teams to deploy.
What changed
Vetted MCP servers now pass through a dedicated scanning service, with security evidence surfaced in the CorpAI admin dashboard.
Know what is being reviewed
Each review is tied to concrete server metadata: source location, image tag, and image digest.
Run source and dependency checks
A scanner service evaluates source patterns, dependency vulnerabilities, and coverage signals.
Produce evidence people can use
The scan creates a normalized report, SBOM, score, source metadata, and security posture.
Make deployment posture-aware
Approved servers move smoothly; higher-risk servers require explicit review or are blocked.
An MCP server is not just another integration listing. It is executable software that can sit between an AI agent and an enterprise system. When a server is vetted for the CorpAI catalog, the question is not only whether it works. The question is whether there is enough security evidence for an enterprise admin to decide how it should be deployed.
This feature adds that evidence layer. For vetted public MCP servers, CorpAI reviews the relevant source and dependency footprint, produces scan artifacts, assigns a security posture, and makes that posture part of the admin experience. The goal is simple: make adoption faster without turning catalog deployment into blind trust.
The result is visible as more than a generic "verified" badge. Admins can see whether a server is approved, requires review, is blocked, or is still being analyzed. They can inspect the score, source metadata, report, and SBOM when a deeper review is needed.
Why MCP Needs a Gate
MCP servers vary widely in risk. A read-only helper with narrow inputs and no credentials is very different from a server that can write records, execute commands, read files, call arbitrary URLs, or use privileged tokens on behalf of a user. A catalog needs to represent that difference in a way administrators can act on.
Static analysis gives us a repeatable first pass before anything is deployed. It can spot categories of risk that are visible in code and dependency metadata: command execution paths, server-side request forgery patterns, unsafe file access, risky deserialization, weak token handling, vulnerable packages, and gaps between a server's advertised behavior and its implementation. It does not prove a server is safe, but it gives security teams useful evidence before runtime controls take over. For this feature, static analysis is paired with dependency analysis so the review covers both project code and the third-party packages the server brings with it.
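To make those categories concrete, here is a deliberately toy checker in the spirit of such rules. The regexes and category names are invented for illustration and are far cruder than real static-analysis rules, which understand syntax and data flow rather than raw text:

```python
import re

# Toy risk patterns echoing the categories above. These are illustrative
# regexes, not CorpAI's actual rules.
RISK_PATTERNS = {
    # Shell interpolation of untrusted input is a classic command-execution risk
    "command-execution": re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"),
    # Fetching a caller-supplied URL is a potential SSRF sink
    "ssrf": re.compile(r"urllib\.request\.urlopen\(|requests\.get\("),
    # Unpickling untrusted bytes is risky deserialization
    "unsafe-deserialization": re.compile(r"pickle\.loads?\("),
}

def flag_risks(source: str) -> list[str]:
    """Return the risk categories whose pattern appears in the source text."""
    return [name for name, pattern in RISK_PATTERNS.items()
            if pattern.search(source)]
```

The sketch only shows the shape of the check; this is exactly why purpose-built tools are used instead of regexes.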
What the Scanner Checks
The scanner starts by making the review concrete. Instead of treating a server as a loose name, the scan is tied to source location, image metadata, tag, digest, Dockerfile context when available, and the set of tools the server exposes. This matters because tags can move and repositories can change. A useful scan result needs to be traceable to the artifact that was actually reviewed.
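That traceability requirement can be pictured as a small record pinned to the image digest. The field names below are hypothetical, not CorpAI's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScanTarget:
    """Hypothetical metadata a scan result is pinned to."""
    source_repo: str        # where the code came from
    image_tag: str          # human-readable tag (mutable over time)
    image_digest: str       # content digest (immutable; what was actually scanned)
    tools: tuple[str, ...]  # the tools the server exposes

    def reference(self) -> str:
        # Tags can move; the digest identifies the exact reviewed artifact.
        return f"{self.image_tag}@{self.image_digest}"
```

Keeping the digest alongside the tag is what makes a result traceable after the tag has been repointed.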
The scan itself runs in a purpose-built scanning service. That service prepares the source, runs the scanning tools, stores artifacts, computes a score, and returns a normalized decision. The catalog uses that decision to show posture and enforce deployment behavior.
The current scanner runtime uses familiar open-source security tooling. Semgrep performs source-level static analysis, including CorpAI rules tuned for MCP server risks. Syft generates a software bill of materials, and Grype evaluates dependency vulnerabilities from that inventory. Those tools produce raw findings; CorpAI normalizes them into a report that is easier to compare across servers.
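A minimal orchestration sketch for that toolchain might build the CLI invocations like this. The flags shown are the standard JSON-output options of each tool; the directory and file paths, and the wrapper functions themselves, are assumptions:

```python
def semgrep_cmd(src_dir: str, rules: str = "auto") -> list[str]:
    # Static analysis over the prepared source tree, JSON findings to stdout
    return ["semgrep", "scan", "--config", rules, "--json", src_dir]

def syft_cmd(image_ref: str) -> list[str]:
    # Generate an SBOM for the image in Syft's JSON format
    return ["syft", image_ref, "-o", "json"]

def grype_cmd(sbom_path: str) -> list[str]:
    # Evaluate vulnerabilities against the SBOM instead of rescanning the image
    return ["grype", f"sbom:{sbom_path}", "-o", "json"]
```

Each command would then be executed (for example via `subprocess.run`) and its JSON output captured for normalization into the shared report format.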
We avoid treating scanner output as a black box. Each result is turned into a risk score, a pass/fail/review/error outcome, a set of blocking controls when applicable, and references to artifacts. The score is not meant to replace judgment. It is a compact signal that helps teams quickly distinguish "looks clean," "needs attention," and "do not deploy without remediation."
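The score-and-outcome step can be sketched as a small normalization function. The weights and thresholds below are invented for illustration; CorpAI does not publish its real ones:

```python
# Invented severity weights; real scoring is more nuanced than a weighted sum.
SEVERITY_WEIGHTS = {"critical": 10, "high": 5, "medium": 2, "low": 1}

def risk_score(findings: list[str]) -> int:
    """Sum severity weights across findings (each entry is a severity label)."""
    return sum(SEVERITY_WEIGHTS.get(sev, 0) for sev in findings)

def outcome(score: int, has_blocking_control: bool) -> str:
    """Map a score to a pass/review/fail outcome (illustrative thresholds)."""
    if has_blocking_control:
        return "fail"          # a blocking control overrides the score
    if score == 0:
        return "pass"          # "looks clean"
    return "review" if score < 10 else "fail"
```

The point of the sketch is the shape of the signal: a compact score plus an outcome, with blocking controls able to override either.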
Some details stay intentionally abstract in a public write-up. We do not publish every rule, threshold, or decision path. What matters for users is the model: source and dependencies are checked, the evidence is preserved, and the deployment experience changes based on the resulting posture.
Security Report
A normalized report groups source findings and dependency findings by severity.
SBOM
A downloadable software bill of materials gives teams a dependency inventory to inspect.
Source Trace
Scan results include source metadata so reviewers know what code the result corresponds to.
A completed scan maps into a catalog posture: approved, review required, blocked, or unknown. Approved servers can follow the normal deployment path. Review-required servers are visible in the catalog, but deployment requires an explicit admin decision. Blocked servers cannot be deployed until the posture changes.
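The deployment gate those postures imply can be sketched as a small policy function. The enum values and override flag are illustrative; unknown is treated conservatively as non-deployable, which is an assumption of this sketch:

```python
from enum import Enum

class Posture(Enum):
    APPROVED = "approved"
    REVIEW_REQUIRED = "review_required"
    BLOCKED = "blocked"
    UNKNOWN = "unknown"

def can_deploy(posture: Posture, admin_override: bool = False) -> bool:
    """Approved deploys normally; review-required needs an explicit admin
    decision; blocked and unknown cannot deploy until the posture changes."""
    if posture is Posture.APPROVED:
        return True
    if posture is Posture.REVIEW_REQUIRED:
        return admin_override
    return False
```

Note that the override only applies to the review state: a blocked server stays blocked regardless of who asks.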
Existing catalog entries can also be rescanned. That matters because security is not static: source code changes, dependencies age, and new vulnerability data appears. A server that looked acceptable last month may deserve a fresh look today.
What Admins See
This feature is visible in the product, not hidden behind the scenes. The admin dashboard exposes security posture directly in the MCP server catalog and on the server detail page. Teams can see whether a server is approved, blocked, under analysis, or marked for review. They can inspect the score, last scanned time, source metadata, and the latest error if a scan did not complete.
For technical review, CorpAI renders the security report into source and dependency findings, grouped by severity. Admins can download the report or the SBOM for deeper review, ticketing, or vendor follow-up. If a server needs another look, they can request a rescan from the same workflow.
Review is a real state
A server can be useful without being automatically safe for every organization. CorpAI makes that distinction visible, so admins can decide with evidence instead of guessing from a catalog label.
This is the kind of control enterprise teams tend to need. They rarely want a single binary answer for every tool. They want a system that makes the default path safer, preserves evidence, and gives accountable humans a way to make context-specific decisions.
What Teams Gain
For admins, this turns catalog adoption into an evidence-backed workflow. Instead of deciding from a name, logo, and description, they can see how a server was reviewed, what the scanner found, and whether the server should deploy normally or receive extra attention.
For security teams, it creates a consistent language for MCP server risk: posture, score, report, SBOM, source metadata, and review state. That consistency makes it easier to compare servers, ask better questions, and keep review work focused on the places that deserve it.
Most importantly, it helps enterprises move faster with more confidence. Useful MCP servers can be adopted with less manual friction, while higher-risk servers are clearly marked before they reach sensitive workflows. That is the balance CorpAI is aiming for: speed where it is earned, friction where it is useful, and evidence available when people need to make a decision.
Want to see how CorpAI vets and deploys MCP servers?