Local Explainability
Local explainability module.
This module provides customized local explainability functionality for entity matching tasks: per-pair explanations of a matching model's predictions via LIME and Shapley values.
- neer_match.local_explainability.lime(model, left, right, xindices, n=100)
Calculate local interpretable model-agnostic explanations.
Generates a sample of n instances around the pair at the given indices and fits a weighted linear model to explain the prediction of the model.
- Parameters:
  - model (Union[DLMatchingModel, NSMatchingModel]) – The matching model to use.
  - left (DataFrame) – The left DataFrame.
  - right (DataFrame) – The right DataFrame.
  - xindices (List[int]) – The indices of the pair to explain.
  - n (int) – The number of samples to generate.
- Return type:
  DataFrame
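The sample-and-fit procedure described above (perturb the instance, weight samples by proximity, fit a weighted linear surrogate) can be sketched in plain NumPy. This is a minimal illustration of the LIME technique, not the neer_match implementation; the `predict` callable, the Gaussian perturbation, and the `scale` kernel width are assumptions:

```python
import numpy as np


def lime_sketch(predict, x, n=100, scale=0.1, rng=None):
    """Minimal LIME-style explanation: perturb x, weight the samples
    by proximity to x, and fit a weighted linear surrogate model.

    Illustrative sketch only -- neer_match's lime() works on record
    pairs and returns a DataFrame of similarity-feature effects.
    """
    rng = np.random.default_rng(rng)
    # Draw n perturbed instances around x.
    samples = x + rng.normal(0.0, scale, size=(n, x.size))
    preds = np.array([predict(s) for s in samples])
    # Proximity weights: closer samples influence the fit more.
    dists = np.linalg.norm(samples - x, axis=1)
    weights = np.exp(-(dists ** 2) / (2.0 * scale ** 2))
    # Weighted least squares with an intercept column,
    # solved via the normal equations X' W X b = X' W y.
    X = np.hstack([np.ones((n, 1)), samples])
    W = np.diag(weights)
    coef, *_ = np.linalg.lstsq(X.T @ W @ X, X.T @ W @ preds, rcond=None)
    return coef[1:]  # per-feature local slopes (the explanation)
```

The returned slopes approximate the model's local sensitivity to each feature around the explained instance; for an exactly linear `predict`, the surrogate recovers its coefficients.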
- neer_match.local_explainability.shap(model, left, right, xindices, xkey, iterations=100)
Calculate the Shapley value of a key.
Estimates the Shapley value of a key for a pair of instances by sampling random sets of features and comparing the model predictions with and without the key.
- Parameters:
  - model (Union[DLMatchingModel, NSMatchingModel]) – The matching model to use.
  - left (DataFrame) – The left DataFrame.
  - right (DataFrame) – The right DataFrame.
  - xindices (List[int]) – The indices of the pair to explain.
  - xkey (str) – The key for which to calculate the Shapley value.
  - iterations (int) – The number of sampling iterations to use.
- Return type:
  float
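The estimation scheme above — sampling random feature coalitions and averaging the change in the model's prediction with and without the key — can be sketched as a Monte Carlo Shapley estimate. A minimal illustration only, not the neer_match implementation: it uses an integer feature index in place of `xkey`, and the `predict` callable and `baseline` vector (feature values standing in for "absent") are assumptions:

```python
import numpy as np


def shapley_sketch(predict, x, baseline, key, iterations=100, rng=None):
    """Monte Carlo Shapley estimate for feature `key`: sample random
    feature orderings, form the coalition preceding `key`, and average
    the prediction change from adding `key` to that coalition.

    Illustrative sketch only -- neer_match's shap() works on record
    pairs and string-valued keys.
    """
    rng = np.random.default_rng(rng)
    d = x.size
    total = 0.0
    for _ in range(iterations):
        order = rng.permutation(d)
        pos = int(np.where(order == key)[0][0])
        coalition = order[:pos]  # features "present" before key
        # Start from the baseline; reveal the coalition's true values.
        without_key = baseline.copy()
        without_key[coalition] = x[coalition]
        # Same coalition, but additionally reveal the key feature.
        with_key = without_key.copy()
        with_key[key] = x[key]
        total += predict(with_key) - predict(without_key)
    return total / iterations
```

For an additive model the estimate equals the key's exact Shapley value; for interacting features, more iterations reduce the Monte Carlo error.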