Tool Correctness
The tool correctness metric is an agentic evaluation metric that assesses your LLM agent's function/tool-calling ability. It is calculated by comparing whether every tool that is expected to be used was indeed called.
Required Arguments
To use the ToolCorrectnessMetric, you'll have to provide the following arguments when creating an LLMTestCase:
- input
- actual_output
- tools_called
- expected_tools
Example
from deepeval.metrics import ToolCorrectnessMetric
from deepeval.test_case import LLMTestCase
metric = ToolCorrectnessMetric()
test_case = LLMTestCase(
    input="What if these shoes don't fit?",
    actual_output="We offer a 30-day full refund at no extra cost.",
    # Replace this with the tools that were actually used by your LLM agent
    tools_called=["WebSearch"],
    expected_tools=["WebSearch", "ToolQuery"]
)
metric.measure(test_case)
print(metric.score)
print(metric.reason)
There are six optional parameters when creating a ToolCorrectnessMetric:
- [Optional] threshold: a float representing the minimum passing threshold, defaulted to 0.5.
- [Optional] include_reason: a boolean which when set to True, will include a reason for its evaluation score. Defaulted to True.
- [Optional] strict_mode: a boolean which when set to True, enforces a binary metric score: 1 for perfection, 0 otherwise. It also overrides the current threshold and sets it to 1. Defaulted to False.
- [Optional] verbose_mode: a boolean which when set to True, prints the intermediate steps used to calculate said metric to the console, as outlined in the How Is It Calculated section. Defaulted to False.
- [Optional] should_consider_ordering: a boolean which when set to True, will consider the ordering in which the tools were called. For example, if expected_tools=["WebSearch", "ToolQuery", "WebSearch"] and tools_called=["WebSearch", "WebSearch"], the metric will still consider the tool calling to be correct. Defaulted to False.
- [Optional] should_exact_match: a boolean which when set to True, will require the tools_called and expected_tools to be exactly the same. Defaulted to False.
Since should_exact_match is a stricter criterion than should_consider_ordering, setting should_consider_ordering has no effect when should_exact_match is set to True.
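To make the difference between these two modes concrete, here is a minimal pure-Python sketch (an illustration of the documented behavior, not deepeval's actual implementation): with should_consider_ordering, the tools called must appear in the expected order but gaps are allowed, whereas should_exact_match requires the two lists to be identical.

```python
def is_ordered_subsequence(called, expected):
    """True if `called` appears within `expected` in order (gaps allowed).

    Illustrative only -- approximates the should_consider_ordering check.
    """
    it = iter(expected)
    # Each membership test advances the iterator, so order is enforced.
    return all(tool in it for tool in called)

expected_tools = ["WebSearch", "ToolQuery", "WebSearch"]
tools_called = ["WebSearch", "WebSearch"]

# should_consider_ordering=True: order preserved, gaps allowed -> correct
print(is_ordered_subsequence(tools_called, expected_tools))  # True

# should_exact_match=True: lists must be identical -> incorrect
print(tools_called == expected_tools)  # False
```

This is why the example above passes under should_consider_ordering but would fail under should_exact_match.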
How Is It Calculated?
The ToolCorrectnessMetric, unlike other deepeval metrics, is not calculated using any models or LLMs; instead, it is computed via exact matching between the expected_tools and tools_called parameters.
The tool correctness metric score is calculated according to the following equation:

Tool Correctness = (Number of Correctly Used Tools) / (Total Number of Expected Tools)
This metric assesses the accuracy of your agent's tool usage by comparing the tools_called by your LLM agent against the list of expected_tools. A score of 1 indicates that every expected tool was called correctly according to the list of expected_tools, should_consider_ordering, and should_exact_match, while a score of 0 signifies that none of the tools_called were correct.
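The default (unordered, non-exact) scoring can be sketched as follows. This is a minimal approximation of the score described above, assuming the denominator is the number of distinct expected tools; it is not deepeval's actual source code.

```python
def tool_correctness_score(tools_called, expected_tools):
    """Fraction of distinct expected tools that were actually called.

    Illustrative sketch of the default (unordered, non-exact) mode.
    """
    unique_expected = set(expected_tools)
    if not unique_expected:
        return 1.0  # nothing expected, nothing to get wrong
    correct = sum(1 for tool in unique_expected if tool in tools_called)
    return correct / len(unique_expected)

# Matches the example above: only WebSearch was called out of two expected tools.
print(tool_correctness_score(["WebSearch"], ["WebSearch", "ToolQuery"]))  # 0.5
```

With the default threshold of 0.5, this example would sit exactly at the passing boundary; calling both expected tools would yield a score of 1.0.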