
Attribution Methods

trulens.nn.attribution

Attribution methods quantitatively measure the contribution of each of a function's individual inputs to its output. Gradient-based attribution methods compute the gradient of a model with respect to its inputs to describe how important each input is towards the output prediction. These methods can be applied to assist in explaining deep networks.

TruLens provides implementations of several such techniques, found in this package.
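For orientation, here is a minimal sketch of how an attribution method from this package is typically constructed and invoked. The toy Keras model, its shapes, and the resolution value are illustrative assumptions, not part of this API reference; `get_model_wrapper` is the standard entry point for wrapping a supported backend model.

```python
import numpy as np
from tensorflow.keras import layers, models

from trulens.nn.attribution import IntegratedGradients
from trulens.nn.models import get_model_wrapper

# Toy classifier; the architecture and shapes are illustrative only.
keras_model = models.Sequential([
    layers.Dense(8, activation='relu', input_shape=(4,)),
    layers.Dense(3),
])

# Wrap the model so TruLens can instrument it.
wrapper = get_model_wrapper(keras_model)

# Attribute each input feature toward the max-scoring output class.
infl = IntegratedGradients(wrapper, resolution=10)

x = np.random.rand(2, 4).astype(np.float32)  # a batch of 2 records
attrs = infl.attributions(x)                 # same shape as x: (2, 4)
```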

Classes

AttributionResult dataclass

Attribution method output container.

AttributionMethod

Bases: ABC

Interface used by all attribution methods.

An attribution method takes a neural network model and provides the ability to assign values to the variables of the network that specify the importance of each variable towards particular predictions.

Attributes
model property
model: ModelWrapper

Model for which attributions are calculated.

Functions
__init__ abstractmethod
__init__(model: ModelWrapper, rebatch_size: int = None, *args, **kwargs)

Abstract constructor.

PARAMETER DESCRIPTION
model

Model for which attributions are calculated.

TYPE: ModelWrapper

rebatch_size

Will rebatch instances to this size if given. This may be required for GPU usage when using a DoI that produces multiple instances per user-provided instance. Many-valued DoIs expand the tensors sent to each layer to original_batch_size * doi_size; the rebatch size breaks original_batch_size * doi_size into rebatch_size chunks to send to the model (see the sketch below).

TYPE: int DEFAULT: None
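To make the rebatching arithmetic concrete, a small hedged sketch; the batch size, DoI size, and rebatch size below are illustrative values, not defaults.

```python
original_batch_size = 32  # records passed to attributions()
doi_size = 10             # e.g., a LinearDoi with 10 interpolation points

# Each layer sees original_batch_size * doi_size expanded instances.
expanded = original_batch_size * doi_size  # 320

rebatch_size = 64
num_chunks = -(-expanded // rebatch_size)  # ceiling division -> 5 chunks
```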

attributions
attributions(*model_args: ArgsLike, **model_kwargs: KwargsLike) -> Union[TensorLike, ArgsLike[TensorLike], ArgsLike[ArgsLike[TensorLike]]]

Returns attributions for the given input. Attributions have the same shape as the layer for which they are generated.

The numeric scale of the attributions depends on the specific implementations of the Distribution of Interest and Quantity of Interest, but it is generally related to the scale of gradients on the Quantity of Interest.

For example, Integrated Gradients uses the linear-interpolation Distribution of Interest, which satisfies the completeness axiom: the sum of all attributions for a record equals the output determined by the Quantity of Interest on that same record.

The Point Distribution of Interest is determined by the gradient at a single point, making it a good measure of local model sensitivity.

PARAMETER DESCRIPTION
model_args

The args and kwargs given to the call method of a model. These should represent the records to obtain attributions for, and are assumed to be a batched input. If self.model supports evaluation on data tensors, the appropriate tensor type may be used (e.g., PyTorch models may accept PyTorch tensors in addition to np.ndarray values). The shape of the inputs must match the input shape of self.model.

TYPE: ArgsLike DEFAULT: ()

RETURNS

- np.ndarray, when there is a single attribution_cut input and a single QoI output;
- ArgsLike[np.ndarray], when there is a single input and multiple outputs (or vice versa);
- ArgsLike[ArgsLike[np.ndarray]], when there are multiple outputs (outer) and multiple inputs (inner).

An array of attributions, matching the shape and type of `from_cut`
of the slice. Each entry in the returned array represents the degree
to which the corresponding feature affected the model's outcome on
the corresponding point.

If attributing to a component with multiple inputs, a list for each
will be returned.

If the quantity of interest features multiple outputs, a list for
each will be returned.
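As a hedged illustration of the simplest return shape (single input cut, single QoI output), continuing the sketch from the top of this page where infl and x were defined:

```python
attrs = infl.attributions(x)

# With a single attribution_cut input and a single QoI output, the
# result is one np.ndarray matching the shape of the slice's from_cut;
# here, the model input itself.
assert attrs.shape == x.shape
```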

InternalInfluence

Bases: AttributionMethod

Internal attributions parameterized by a slice, quantity of interest, and distribution of interest.

The slice specifies the layers at which the internals of the model are to be exposed; it is represented by two cuts, which specify the layer the attributions are assigned to and the layer from which the quantity of interest is derived. The Quantity of Interest (QoI) is a function of the output specified by the slice that determines the network output behavior that the attributions are to describe. The Distribution of Interest (DoI) specifies the records over which the attributions are aggregated.

More information can be found in the following paper:

Influence-Directed Explanations for Deep Convolutional Networks

This should be cited using:

@INPROCEEDINGS{
    leino18influence,
    author={
        Klas Leino and
        Shayak Sen and
        Anupam Datta and
        Matt Fredrikson and
        Linyi Li},
    title={
        Influence-Directed Explanations
        for Deep Convolutional Networks},
    booktitle={IEEE International Test Conference (ITC)},
    year={2018},
}
Functions
__init__
__init__(model: ModelWrapper, cuts: SliceLike, qoi: QoiLike, doi: DoiLike, multiply_activation: bool = True, return_grads: bool = False, return_doi: bool = False, *args, **kwargs)
PARAMETER DESCRIPTION
model

Model for which attributions are calculated.

TYPE: ModelWrapper

cuts

The slice to use when computing the attributions. The slice keeps track of the layer whose output attributions are calculated and the layer for which the quantity of interest is computed. Expects a Slice object, or a related type that can be interpreted as a Slice, as documented below.

If a single Cut object is given, it is assumed to be the cut representing the layer for which attributions are calculated (i.e., from_cut in Slice) and the layer for the quantity of interest (i.e., to_cut in slices.Slice) is taken to be the output of the network. If a tuple or list of two Cuts is given, they are assumed to be from_cut and to_cut, respectively.

A cut (or the cuts within the tuple) can also be represented as an int, str, or None. If an int is given, it represents the index of a layer in model. If a str is given, it represents the name of a layer in model. None is an alternative for slices.InputCut.

TYPE: SliceLike

qoi

Quantity of interest to attribute. Expects a QoI object, or a related type that can be interpreted as a QoI, as documented below.

If an int is given, the quantity of interest is taken to be the slice output for the class/neuron/channel specified by the given integer, i.e.,

quantities.InternalChannelQoI(qoi)

If a tuple or list of two integers is given, then the quantity of interest is taken to be the comparative quantity for the class given by the first integer against the class given by the second integer, i.e.,

quantities.ComparativeQoI(*qoi)

If a callable is given, it is interpreted as a function representing the QoI, i.e.,

quantities.LambdaQoI(qoi)

If the string, 'max', is given, the quantity of interest is taken to be the output for the class with the maximum score, i.e.,

quantities.MaxClassQoI()

TYPE: QoiLike

doi

Distribution of interest over inputs. Expects a DoI object, or a related type that can be interpreted as a DoI, as documented below.

If the string, 'point', is given, the distribution is taken to be the single point passed to attributions, i.e.,

distributions.PointDoi()

If the string, 'linear', is given, the distribution is taken to be the linear interpolation from the zero input to the point passed to attributions, i.e.,

distributions.LinearDoi()

TYPE: DoiLike

multiply_activation

Whether to multiply the gradient result by its corresponding activation, thus converting from "influence space" to "attribution space."

TYPE: bool DEFAULT: True
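A hedged construction sketch, reusing the wrapper and input batch from the earlier example; the layer index, QoI, and DoI choices are illustrative only:

```python
from trulens.nn.attribution import InternalInfluence

# Attribute the output of layer 2 (an illustrative index) toward the
# max-scoring output class, over the point distribution.
infl = InternalInfluence(
    wrapper,   # ModelWrapper from the earlier sketch
    cuts=2,    # single cut -> from_cut; to_cut defaults to the output
    qoi='max',
    doi='point',
    multiply_activation=True,
)

# Attributions match the shape of the layer-2 output, not the input.
internal_attrs = infl.attributions(x)
```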

InputAttribution

Bases: InternalInfluence

Attributions of input features on either internal or output quantities. This is essentially an alias for

InternalInfluence(
    model,
    (trulens.nn.slices.InputCut(), cut),
    qoi,
    doi,
    multiply_activation)
Functions
__init__
__init__(model: ModelWrapper, qoi_cut: CutLike = None, qoi: QoiLike = 'max', doi_cut: CutLike = None, doi: DoiLike = 'point', multiply_activation: bool = True, *args, **kwargs)
PARAMETER DESCRIPTION
model

Model for which attributions are calculated.

qoi_cut

The cut determining the layer from which the QoI is derived. Expects a Cut object, or a related type that can be interpreted as a Cut, as documented below.

If an int is given, it represents the index of a layer in model.

If a str is given, it represents the name of a layer in model.

None is an alternative for slices.OutputCut().

DEFAULT: None

qoi

Quantity of interest to attribute. Expects a QoI object, or a related type that can be interpreted as a QoI, as documented below.

If an int is given, the quantity of interest is taken to be the slice output for the class/neuron/channel specified by the given integer, i.e.,

quantities.InternalChannelQoI(qoi)

If a tuple or list of two integers is given, then the quantity of interest is taken to be the comparative quantity for the class given by the first integer against the class given by the second integer, i.e.,

quantities.ComparativeQoI(*qoi)

If a callable is given, it is interpreted as a function representing the QoI, i.e.,

quantities.LambdaQoI(qoi)

If the string, 'max', is given, the quantity of interest is taken to be the output for the class with the maximum score, i.e.,

quantities.MaxClassQoI()

DEFAULT: 'max'

doi_cut

For models which have non-differentiable pre-processing at the start of the model, specify the cut of the initial differentiable input form. For NLP models, for example, this could point to the embedding layer. If not provided, InputCut is assumed.

DEFAULT: None

doi

Distribution of interest over inputs. Expects a DoI object, or a related type that can be interpreted as a DoI, as documented below.

If the string, 'point', is given, the distribution is taken to be the single point passed to attributions, i.e.,

distributions.PointDoi()

If the string, 'linear', is given, the distribution is taken to be the linear interpolation from the zero input to the point passed to attributions, i.e.,

distributions.LinearDoi()

DEFAULT: 'point'

multiply_activation

Whether to multiply the gradient result by its corresponding activation, thus converting from "influence space" to "attribution space."

DEFAULT: True
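A hedged sketch of the default configuration, again reusing the wrapper and inputs from the earlier example:

```python
from trulens.nn.attribution import InputAttribution

# Equivalent to InternalInfluence with from_cut = InputCut(): attribute
# raw input features toward the max-scoring class at the output.
infl = InputAttribution(wrapper, qoi='max', doi='point')
attrs = infl.attributions(x)  # same shape as x
```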

IntegratedGradients

Bases: InputAttribution

Implementation for the Integrated Gradients method from the following paper:

Axiomatic Attribution for Deep Networks

This should be cited using:

@INPROCEEDINGS{
    sundararajan17axiomatic,
    author={Mukund Sundararajan and Ankur Taly and Qiqi Yan},
    title={Axiomatic Attribution for Deep Networks},
    booktitle={International Conference on Machine Learning (ICML)},
    year={2017},
}

This is essentially an alias for

InternalInfluence(
    model,
    (trulens.nn.slices.InputCut(), trulens.nn.slices.OutputCut()),
    'max',
    trulens.nn.distributions.LinearDoi(baseline, resolution),
    multiply_activation=True)
Functions
__init__
__init__(model: ModelWrapper, baseline=None, resolution: int = 50, doi_cut=None, qoi='max', qoi_cut=None, *args, **kwargs)
PARAMETER DESCRIPTION
model

Model for which attributions are calculated.

TYPE: ModelWrapper

baseline

The baseline to interpolate from. Must be the same shape as the input. If None is given, the zero vector of the appropriate shape will be used.

DEFAULT: None

resolution

Number of points to use in the approximation. A higher resolution is more computationally expensive, but gives a better approximation of the mathematical formula this attribution method represents.

TYPE: int DEFAULT: 50
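A hedged sketch of the baseline and resolution parameters; passing an explicit zero baseline is equivalent to the default None and is spelled out only for illustration:

```python
import numpy as np
from trulens.nn.attribution import IntegratedGradients

baseline = np.zeros_like(x)  # same shape as the input batch
infl = IntegratedGradients(wrapper, baseline=baseline, resolution=100)

# Higher resolution -> closer approximation of the path integral, at
# higher computational cost.
attrs = infl.attributions(x)
```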
