Can Model Context Protocol (MCP) help 'open' win?

Right now, many of us are suspicious of AI as something being done to us rather than for us. Corporate-driven efforts to advance capabilities feel more in service of stock prices than uplifting and empowering users' lives. This lack of control and user influence is why the open source AI conversation feels so urgent right now (among other reasons).
In that complex and passionate effort around openness, something that's grabbed my attention is Model Context Protocol (MCP). MCP is an open source protocol that standardizes and enables interoperable connections between LLMs and any given data source or tool.
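If you haven't seen it in code, here's a minimal sketch of what an MCP server looks like, using the official Python SDK's FastMCP helper; the tool itself is a made-up stub standing in for whatever data source or tool you actually trust.

```python
# A minimal sketch of an MCP server using the official Python SDK's FastMCP
# helper; the 'lookup' tool is a stub, not a real data source.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("hello-data")


@mcp.tool()
def lookup(term: str) -> str:
    """Look up a term in a data source the user has chosen to trust."""
    # A real server would query your own file, database, or API here.
    return f"No entry found for {term!r} (stub)."


if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio so any MCP-aware client can connect
```

Any MCP-aware client (Claude Desktop is the one I've used) can then be pointed at a server like this and call its tools on your behalf.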
Most of what you've heard about MCP to this point is likely about boosting productivity, because exposing tools and data to our work processes is exciting from a productivity standpoint (although 'how productive' is still up for debate).
What interests me most right now is how MCP can help us influence LLM output with our own agency around what tools and data we trust, and ultimately what we, the users, actually want. The opportunity reminds me of the old Mozilla Webmaker mantra: "move from users of the web to actively creating/making," except this time for AI.
With 'open' in mind, here are some areas I'm playing with or would love to see evolve.
(Open Innovation) Reduce dependency on platforms to ship features you want
Sometimes when you have a niche use case or curiosity, it's hard to get it to the top of a product team's queue.
I used the GitHub MCP server with Claude Desktop to build a 'project view' of contributions to a project over the last year, sorted by contributor association (personal, corporate, academic). This can be helpful in deciding which contributors need more support than others. Depending on the scopes of your GitHub token, it can surface internal data as well (for example, an 'employee' association).

Note: I had to purchase a basic tier from Claude to avoid hitting limits, which makes it 'not quite open enough,' but it's a start. It also took both Claude and GitHub Copilot with a code review by ChatGPT to get the code right :D
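If I were to turn that one-off chat session into something more durable, it might look roughly like the sketch below: an MCP tool that pulls a repo's contributors from the GitHub REST API and groups them by a very rough heuristic for association. The heuristic is entirely my own, it skips pagination and the last-year filter, and it isn't the method I actually used - treat it as a starting point.

```python
# Rough sketch only: a standalone MCP tool for the 'project view' idea, built on
# GitHub's REST API (GET /repos/{owner}/{repo}/contributors and GET /users/{username}).
# The personal/corporate/academic classification is a hypothetical heuristic,
# pagination and date filtering are ignored, and GITHUB_TOKEN is assumed to be
# set in the environment.
import os
from collections import defaultdict

import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("contribution-view")
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}


def classify(profile: dict) -> str:
    """Very rough association guess from a public GitHub profile."""
    company = (profile.get("company") or "").lower()
    email = (profile.get("email") or "").lower()
    if email.endswith(".edu") or "university" in company:
        return "academic"
    if company:
        return "corporate"
    return "personal"


@mcp.tool()
def contributions_by_association(owner: str, repo: str) -> dict:
    """Group a repository's contributors by (heuristic) association."""
    url = f"https://api.github.com/repos/{owner}/{repo}/contributors"
    groups: dict[str, list[str]] = defaultdict(list)
    for c in requests.get(url, headers=HEADERS, timeout=30).json():
        profile = requests.get(c["url"], headers=HEADERS, timeout=30).json()
        groups[classify(profile)].append(c["login"])
    return dict(groups)


if __name__ == "__main__":
    mcp.run()
```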
(Open Data) Make ALL open datasets accessible
I know from my time volunteering with Canada's Open Government Stakeholders Forum how painstaking it is to ship open data and then make it discoverable and usable for citizens who want to use it to hold their government accountable. MCP could help us break open data out of silos and make it discoverable for this and other things that matter to people in their day-to-day lives.
For Open Government: "Does this email in my inbox look like anything mentioned in the RCMP fraud dataset?"
For Open Education: "Of my daughter's university choices (local file list), which have courses with open textbooks (BC Campus tool)?"
For choosing Open Source: "Does this project (or list of projects) have a code of conduct? Is the sentiment in the community conversations positive? What's the OpenSSF Scorecard?"
For Open Science: "What openly available scientific datasets can help me challenge this government policy?"
A potential candidate to build on: openDataMCP.
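To sketch what this could look like: many government portals (including open.canada.ca) run on CKAN, which exposes a standard package_search endpoint, so a small dataset-discovery MCP server might be as simple as the example below. The portal URL and the result/link formatting are illustrative, not a tested integration.

```python
# A sketch of exposing an open data portal to an agent. CKAN's package_search
# action is a standard, documented endpoint; the portal URL and landing-page
# link format here are examples only.
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("open-data-search")
PORTAL = "https://open.canada.ca/data/en"


@mcp.tool()
def search_datasets(query: str, rows: int = 5) -> list[dict]:
    """Search the portal's catalogue and return a title and landing page per hit."""
    resp = requests.get(
        f"{PORTAL}/api/3/action/package_search",
        params={"q": query, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    return [
        {"title": d.get("title"), "url": f"{PORTAL}/dataset/{d.get('name')}"}
        for d in resp.json()["result"]["results"]
    ]


if __name__ == "__main__":
    mcp.run()
```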
(Open Source AI) Ask an LLM if a model, weights, and data are open source according to criteria you set
Right now, if I ask Copilot for an open source LLM, it will suggest Llama, which actually comes with license restrictions. There's no way to provide context to Copilot for what you mean by open source.
Creating an MCP server from https://isitopen.ai/ would be an interesting first step. By insisting on openness in what we build, we influence demand for open source solutions.
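As a hedged sketch of a first step, here's what a tiny "does this meet my openness criteria?" tool could look like, using the license tags on the Hugging Face Hub. The allowed-license set is a placeholder for whatever criteria you decide on, and a license tag alone says nothing about whether the weights and training data are actually open - that's exactly the nuance a fuller isitopen.ai-backed server would need to capture.

```python
# A sketch, not a definition of open source: check a model's declared license
# tags on the Hugging Face Hub against criteria *you* set. huggingface_hub's
# model_info() is a real API; ALLOWED_LICENSES is an illustrative placeholder.
from huggingface_hub import model_info
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("is-it-open")

# Your criteria, not mine: license identifiers you are willing to accept.
ALLOWED_LICENSES = {"apache-2.0", "mit", "bsd-3-clause", "gpl-3.0"}


@mcp.tool()
def check_model_license(model_id: str) -> dict:
    """Report a model's declared license tags and whether they meet my criteria."""
    tags = model_info(model_id).tags or []
    licenses = [t.removeprefix("license:") for t in tags if t.startswith("license:")]
    return {
        "model": model_id,
        "declared_licenses": licenses,
        "meets_my_criteria": bool(licenses) and all(l in ALLOWED_LICENSES for l in licenses),
    }


if __name__ == "__main__":
    mcp.run()
```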
On Sharing
I'm not sure how best to do this yet, but I'm starting with this post!
One thing I ran into here was how to share my GitHub contributions "project view." Do I share the prompts (there were so many, and in circles, that it's hard to recreate)? Do I share the output (HTML) for others to fork and give to their agents as an example? Do I list (in a README) the subscriptions required to run those commands? Do I include steps for people very new to the technology, like creating Hugging Face and GitHub tokens? Do I share the process (having one LLM review another LLM's code)? I love what Hugging Face Spaces provides with working access to models; maybe there's something similar for posting things we make with MCP so others can easily try them.
How can we make 'testing' more accessible and inclusive from a cost perspective? If someone has to purchase a paid tier of Claude just to test my project - that's a problem.
On Trust & Security
As I was building these prototypes and thinking this through, I realized it would be quite easy for the LLM to 'add fake data' to my contribution report, which almost instantly removes the 'trust' factor of what I've built. How do I validate my outputs, for myself and others, to show I have retained the integrity of the data I started with? Trust is really the center of everything.
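One small idea I'm toying with (a sketch, not a solution): fetch the raw data outside the LLM, fingerprint it, and embed that fingerprint in whatever report the agent produces, so anyone can re-fetch the inputs and check they weren't quietly altered.

```python
# Sketch: fetch the raw data yourself, fingerprint it, and embed the fingerprint
# in the generated report so the inputs can be re-fetched and verified later.
# (Live data changes, so this only proves integrity relative to a point in time.)
import hashlib
import json

import requests


def fetch_with_fingerprint(url: str, token: str) -> tuple:
    """Return the raw API payload plus a SHA-256 digest of its canonical JSON form."""
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=30)
    resp.raise_for_status()
    data = resp.json()
    canonical = json.dumps(data, sort_keys=True).encode()
    return data, hashlib.sha256(canonical).hexdigest()

# The digest goes in the report's footer; a reviewer re-running this against the
# same endpoint can compare digests before trusting the numbers in the report.
```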
On security: if I grant a server access to my accounts on third-party platforms, how can I be sure my local data is actually safe? How do I avoid disclosing something I shouldn't?
Last Words
So many questions still, but MCP does give me hope in the wash of all things AI—that we can influence the direction of AI toward openness. It's a very important, exciting, and opportune time to insist on it.
Acknowledgment: A2A is another open source protocol; its omission here is only because I haven't had time to invest in learning it yet!
Disclaimer: I used em-dashes before AI; any dashes in this post were put there by me, because I like them 😸
Disclaimer: If I have missed something - which I am sure I have, especially around things like trust, security, or existing solutions - please leave a comment; I would love to learn from you.