MCP or CLI? How to Choose the Right Interface for Your AI Tools
2026-04-07 19:50 · Source: securityboulevard.com

I recently migrated one of my personal AI workflows from using Obsidian MCP server to the Obsidian CLI.

Not because the Model Context Protocol (MCP) was broken. The server could read and write my vault just fine. The problem was subtler than that: properties set through MCP bypassed Obsidian’s type validation; moved files left broken wikilinks behind; and templates had to be reconstructed from scratch every time.

This was getting annoying.

Switching to the Obsidian CLI fixed all that, as it allows LLMs to use Obsidian just like you or I would. It routes through the same internal API as the desktop app, so every operation gets the same validation, indexing, and link management.

I didn’t make that switch because of some philosophical stance on protocols. I made it because the CLI worked better for what I was actually doing.

Turns out I wasn’t alone in reaching that conclusion.

The Industry is Converging on the Same Answer

Earlier this year Microsoft shipped a Playwright CLI alongside their existing MCP server, explicitly citing token efficiency as the reason. Their README now says it directly: modern coding agents increasingly favor CLI because it avoids loading large tool schemas into context.

Two months later, Google released gws, a single CLI for all of Google Workspace, described as “built for humans and AI agents.” MCP support is available as an optional mode on top.

Neither company abandoned MCP. Both made CLI the default and positioned MCP as a layer you add when you specifically need it.

When you and two companies of that size independently reach the same architectural conclusion, I believe it’s worth paying attention to why. So let’s explore the trade-offs between MCP and CLI, and when each one makes sense.

What MCP Actually Gets Right

Model Context Protocol gives LLMs a standardized, typed interface for calling external tools. Before MCP, every integration was custom glue code. MCP made tool-use portable across clients and models.

There are environments where MCP is the only viable option. Claude Desktop and sandboxed browser agents don’t have shell access. They can’t run commands. MCP is how they talk to the outside world, and it works.

MCP also handles persistent shared state across multiple agents better than CLI can. If you have several agents coordinating in the same live session, an MCP server gives them a shared context that isolated CLI calls can’t replicate.

The protocol was donated to the Linux Foundation earlier this year. It has a long future ahead, and it should.

The Costs That Show Up in Production

MCP’s overhead only becomes obvious once you’re running real workloads.

  • Token cost. Microsoft’s own benchmarks show that Playwright MCP burns roughly 114K tokens per test session compared to about 27K for the CLI equivalent. That’s a 4x difference. Every MCP tool call includes the full schema definition in context. A CLI just runs a command and returns text.
  • Context degradation. MCP sessions tend to degrade after around 15 steps as accumulated state fills the context window. The model starts losing track of earlier results, making mistakes, and repeating itself. CLI interactions stay flat because each call is stateless. Step 50 is as clean as step 1.
  • Protocol lock-in. An MCP tool only works where MCP clients exist. A CLI works in Claude Code, Cursor, CI pipelines, cron jobs, shell scripts, and for humans like you and me sitting behind a terminal. That universality is the difference between a tool that works in your dev environment and a tool that works everywhere your code runs.
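The token gap is easy to see in miniature. Below is an illustrative sketch, not Microsoft’s actual Playwright definitions: the tool schema, the command string, and the rough 4-characters-per-token estimate are all my own assumptions, but they show why carrying full schemas in context costs more than emitting a command:

```python
import json

# Hypothetical MCP-style tool definition -- sessions carry schemas like
# this in the model's context for every available tool.
mcp_tool_schema = {
    "name": "browser_click",
    "description": "Click an element on the current page.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "selector": {"type": "string", "description": "CSS selector of the element"},
            "button": {"type": "string", "enum": ["left", "right", "middle"]},
            "clickCount": {"type": "integer", "minimum": 1},
        },
        "required": ["selector"],
    },
}

# The CLI equivalent is just a short command string.
cli_invocation = 'playwright click "#submit"'

def rough_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

schema_cost = rough_tokens(json.dumps(mcp_tool_schema))
cli_cost = rough_tokens(cli_invocation)
print(f"schema ~{schema_cost} tokens, command ~{cli_cost} tokens")
```

The exact numbers depend on the tokenizer, but the shape of the gap is the point: the schema is paid on every session, the command only when it runs.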

Why I Built the phyllotaxis CLI Tool

I ran into the same pattern from a different angle while building phyllotaxis, a tool for working with large OpenAPI specs in AI-assisted development.

OpenAPI specs are often too large to dump into an LLM’s context window, but too important to skip. If you’re building against an API, the model needs to know what endpoints exist, what they accept, and what they return.

Three MCP-based OpenAPI explorers already exist, so the problem is clearly real and validated. But I chose to build phyllotaxis as a CLI for the same reasons that kept coming up elsewhere: zero setup, works everywhere, readable by humans too.

The approach is progressive disclosure. You list endpoints to get an overview, inspect one to see its details, then fetch a specific schema if you need it. The model only pays for the context it actually needs. No giant tool schemas loaded upfront, no accumulated session state.
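The three steps above can be sketched in miniature. This is not phyllotaxis’s actual implementation; the toy spec and the function names (`list_endpoints`, `inspect_endpoint`, `get_schema`) are mine, chosen to mirror the list → inspect → fetch flow:

```python
# Minimal sketch of progressive disclosure over an OpenAPI spec.
# The inline spec is a toy example; the real tool's interface may differ.
SPEC = {
    "paths": {
        "/users": {"get": {"summary": "List users"}},
        "/users/{id}": {"get": {"summary": "Fetch one user"}},
        "/orders": {"post": {"summary": "Create an order"}},
    },
    "components": {
        "schemas": {
            "User": {
                "type": "object",
                "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
            }
        }
    },
}

def list_endpoints(spec):
    """Step 1: cheap overview -- method and path only, no schemas."""
    return [f"{method.upper()} {path}"
            for path, ops in spec["paths"].items()
            for method in ops]

def inspect_endpoint(spec, path):
    """Step 2: full details for a single endpoint."""
    return spec["paths"][path]

def get_schema(spec, name):
    """Step 3: one named schema, fetched only on demand."""
    return spec["components"]["schemas"][name]

print(list_endpoints(SPEC))
print(inspect_endpoint(SPEC, "/users"))
print(get_schema(SPEC, "User"))
```

Each step hands the model a slice of the spec instead of the whole document, which is what keeps the context cost proportional to what the model actually uses.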

Could it have been an MCP server? Sure. But it would have been harder to install, limited to MCP-compatible clients, and awkward to call from a shell script or CI pipeline. The CLI version works in every environment I’ve tested it in, with no configuration.

Three Cases Where MCP Wins (Everything Else Is CLI)

The split is simpler than it looks.

Pick MCP when:

1) There’s no shell access. Sandboxed environments like Claude Desktop literally can’t run commands. MCP is the only option, and it’s a good one.

2) Multiple agents need shared state. If several agents are coordinating in the same live session, MCP’s persistent server gives them a shared context. CLI sessions are isolated by design, and that’s a problem here.

3) Your audience isn’t technical. If you’re distributing a tool through an integration registry and your users are installing it with a button click, an MCP server is much easier to hand them than a binary they need to put on their PATH.

Pick CLI for:

Everything else.

And here’s the thing: You can always add an MCP mode on top later. That’s exactly what Google did with gws mcp, and what I have planned for the phyllotaxis roadmap. Start with the universal interface, then wrap it for specific clients that need the protocol layer.
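The “CLI first, protocol later” layering is cheap because the protocol side reduces to a thin adapter. Here is a hedged sketch of that shape: a real MCP mode would register handlers through an MCP SDK, and `echo` stands in for the actual CLI binary, so treat the names below as illustrative:

```python
import shlex
import subprocess

def run_cli_tool(command: str) -> str:
    """Shell out to an existing CLI and return its plain-text output.
    A protocol layer (MCP or otherwise) can wrap exactly this."""
    result = subprocess.run(
        shlex.split(command),
        capture_output=True,
        text=True,
        timeout=30,
    )
    if result.returncode != 0:
        return f"error: {result.stderr.strip()}"
    return result.stdout.strip()

def handle_tool_call(args: dict) -> str:
    """Hypothetical protocol-side handler: it just delegates to the CLI.
    In a real server this would be registered with an MCP SDK."""
    return run_cli_tool(f"echo {shlex.quote(args['text'])}")

print(handle_tool_call({"text": "hello"}))  # -> hello
```

Because all validation and logic live in the CLI itself, the wrapper stays small, and every other consumer (scripts, CI, humans) keeps using the same binary directly.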

A Parting Thought

My Obsidian migration, building phyllotaxis, and Microsoft and Google shipping the Playwright and Google Workspace CLIs all point at the same thing.

None of this means MCP is going away, and it shouldn’t. But the command line was the common interface between humans and machines long before LLMs showed up, and it turns out agents are happy to use it too. CLI works in more places, costs fewer tokens, and doesn’t require your users to install anything beyond the tool itself.

So before you build your next AI-facing tool, ask yourself: what does the agent actually need, and what’s the simplest way to give it that?

Most of the time, it’s just a command.

Source: https://securityboulevard.com/2026/04/mcp-or-cli-how-to-choose-right-interface-for-your-ai-tools/