Open-source MCPEval makes protocol-level agent testing plug-and-play




Enterprises are beginning to adopt the Model Context Protocol (MCP) mainly to help agents identify and use the right tools. Researchers at Salesforce, however, have found another use for the technology: evaluating the AI agents themselves.

The researchers unveiled MCPEval, a new method and open-source toolkit built on the MCP architecture that tests how well agents perform when using tools. They noted that current evaluation methods for agents are limited because they “often relied on static, pre-defined tasks, thus failing to capture the interactive real-world agentic workflows.”

“MCPEval goes beyond traditional success/failure metrics by systematically collecting detailed task trajectories and protocol interaction data, creating unprecedented visibility into agent behavior and generating valuable datasets for iterative improvement,” the researchers said in the paper. “Additionally, because both task creation and verification are fully automated, the resulting high-quality trajectories can be immediately leveraged for rapid fine-tuning and continual improvement of agent models. The comprehensive evaluation reports generated by MCPEval also provide actionable insights towards the correctness of agent-platform communication at a granular level.”

MCPEval differentiates itself by being fully automated, which the researchers claimed allows for rapid evaluation of new MCP tools and servers. It gathers information on how agents interact with tools within an MCP server, generates synthetic data and creates a database to benchmark agents. Users can choose which MCP servers, and which tools within those servers, to test the agent’s performance on.
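To make that server-and-tool selection concrete, the sketch below uses the official MCP Python SDK (the `mcp` package) to connect to a server and list the tools an evaluation run could target. The server command and script name are hypothetical placeholders, and the snippet illustrates the underlying protocol step rather than MCPEval’s own API.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical local MCP server; swap in whichever server you want to evaluate.
server_params = StdioServerParameters(command="python", args=["my_mcp_server.py"])

async def list_candidate_tools() -> list[str]:
    """Connect to the MCP server and return the tool names an evaluation could cover."""
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.list_tools()
            return [tool.name for tool in result.tools]

if __name__ == "__main__":
    print(asyncio.run(list_candidate_tools()))
```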




Shelby Heinecke, senior AI research manager at Salesforce and one of the paper’s authors, told VentureBeat that it is challenging to obtain accurate data on agent performance, particularly for agents in domain-specific roles. 

“We’ve gotten to the point where if you look across the tech industry, a lot of us have figured out how to deploy them. We now need to figure out how to evaluate them properly,” Heinecke said. “MCP is a very new idea, a very new paradigm. So, it’s great that agents are gonna have access to tools, but we again need to evaluate the agents on those tools. That’s exactly what MCPEval is all about.”

How it works

MCPEval’s framework follows a task generation, verification and model evaluation design. Because it leverages multiple large language models (LLMs), users can evaluate agents with whichever of the models available on the market they are most familiar with.

Enterprises can access MCPEval through an open-source toolkit released by Salesforce. Through a dashboard, users configure the evaluation against the chosen MCP server by selecting a model, which then automatically generates tasks for the agent to follow within that server.
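Conceptually, that configuration boils down to a handful of choices. The sketch below is a hypothetical configuration object, not MCPEval’s actual schema, showing the knobs the article describes: which MCP server to target, which tools to cover, which model writes the tasks and which agent model is under evaluation.

```python
from dataclasses import dataclass, field

@dataclass
class EvalRunConfig:
    """Hypothetical illustration of the choices an MCPEval-style run needs."""
    server_command: str                       # how to launch the MCP server under test
    server_args: list[str] = field(default_factory=list)
    tools: list[str] | None = None            # None = evaluate every tool the server exposes
    task_generator_model: str = "gpt-4o"      # model that writes the synthetic tasks
    agent_model: str = "gpt-4o-mini"          # agent being evaluated
    num_tasks: int = 50

config = EvalRunConfig(
    server_command="python",
    server_args=["my_mcp_server.py"],
    tools=["search_flights", "book_flight"],  # hypothetical tool names
)
```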

Once the user verifies the tasks, MCPEval determines the tool calls needed to complete them and treats those calls as ground truth. These tasks serve as the basis for the test. Users choose which model they prefer to run the evaluation, and MCPEval can generate a report on how well the agent and the test model functioned in accessing and using these tools.
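As the article describes it, the scoring step reduces to comparing the tool calls an agent actually made against the verified ground-truth calls for each task. The self-contained sketch below (not MCPEval’s own scoring code) shows one simple way to compute a tool-call match rate over a recorded trajectory; the tool names and arguments are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolCall:
    name: str
    arguments: tuple[tuple[str, str], ...]  # sorted (key, value) pairs so calls are hashable

def call(name: str, **kwargs: str) -> ToolCall:
    return ToolCall(name, tuple(sorted(kwargs.items())))

def tool_call_match_rate(ground_truth: list[ToolCall], trajectory: list[ToolCall]) -> float:
    """Fraction of verified ground-truth calls that appear in the agent's trajectory."""
    if not ground_truth:
        return 1.0
    made = set(trajectory)
    return sum(gt in made for gt in ground_truth) / len(ground_truth)

# Hypothetical task: the ground truth says the agent should search, then book.
ground_truth = [call("search_flights", origin="SFO", dest="JFK"),
                call("book_flight", flight_id="UA-112")]
trajectory = [call("search_flights", origin="SFO", dest="JFK"),
              call("book_flight", flight_id="UA-118")]  # wrong argument

print(tool_call_match_rate(ground_truth, trajectory))  # 0.5
```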

MCPEval not only gathers data to benchmark agents, Heinecke said, but it can also identify gaps in agent performance. Information gleaned by evaluating agents through MCPEval works not only to test performance but also to train the agents for future use. 

“We see MCPEval growing into a one-stop shop for evaluating and fixing your agents,” Heinecke said. 

She added that what makes MCPEval stand out from other agent evaluators is that it brings the testing to the same environment in which the agent will be working. Agents are evaluated on how well they access tools within the MCP server to which they will likely be deployed. 

The paper noted that in experiments, GPT-4 models often provided the best evaluation results. 

Evaluating agent performance

The need for enterprises to begin testing and monitoring agent performance has led to a boom in frameworks and techniques. Some platforms offer testing along with several methods to evaluate both short-term and long-term agent performance.

AI agents perform tasks on behalf of users, often without a human needing to prompt them. So far, agents have proven useful, but they can get overwhelmed by the sheer number of tools at their disposal.

Galileo, a startup, offers a framework that enables enterprises to assess the quality of an agent’s tool selection and identify errors. Salesforce launched capabilities on its Agentforce dashboard to test agents. Researchers from Singapore Management University released AgentSpec to achieve and monitor agent reliability. Several academic studies on MCP evaluation have also been published, including MCP-Radar and MCPWorld.

MCP-Radar, developed by researchers from the University of Massachusetts Amherst and Xi’an Jiaotong University, focuses on more general domain skills, such as software engineering or mathematics. This framework prioritizes efficiency and parameter accuracy. 

On the other hand, MCPWorld from Beijing University of Posts and Telecommunications brings benchmarking to computer-use agents, covering graphical user interfaces, APIs and other interaction surfaces.

Heinecke said ultimately, how agents are evaluated will depend on the company and the use case. However, what is crucial is that enterprises select the most suitable evaluation framework for their specific needs. For enterprises, she suggested considering a domain-specific framework to thoroughly test how agents function in real-world scenarios.

“There’s value in each of these evaluation frameworks, and these are great starting points as they give some early signal of how strong the agent is,” Heinecke said. “But I think the most important evaluation is your domain-specific evaluation and coming up with evaluation data that reflects the environment in which the agent is going to be operating.”


