Attribution Methods¶
¶
Attribution methods quantitatively measure the contribution of each of a function's individual inputs to its output. Gradient-based attribution methods compute the gradient of a model with respect to its inputs to describe how important each input is to the output prediction. These methods can be applied to help explain deep networks.
TruLens provides implementations of several such techniques, found in this package.
AttributionMethod
¶
Interface used by all attribution methods.
An attribution method takes a neural network model and provides the ability to assign values to the variables of the network that specify the importance of each variable towards particular predictions.
model: ModelWrapper
property
readonly
¶
Model for which attributions are calculated.
__init__(self, model, *args, **kwargs)
special
¶
Abstract constructor.
Parameters:
| Name | Type | Description | Default |
|------|------|-------------|---------|
| `model` | `ModelWrapper` | Model for which attributions are calculated. | *required* |
Source code in trulens/nn/attribution.py
```python
@abstractmethod
def __init__(self, model: ModelWrapper, *args, **kwargs):
    """
    Abstract constructor.

    Parameters:
        model :
            Model for which attributions are calculated.
    """
    self._model = model
```
attributions(self, *model_args, **model_kwargs)
¶
Returns attributions for the given input. Attributions are in the same shape as the layer that attributions are being generated for.
The numeric scale of the attributions will depend on the specific implementations of the Distribution of Interest and Quantity of Interest. However, it is generally related to the scale of gradients on the Quantity of Interest.
For example, Integrated Gradients uses the linear interpolation Distribution of Interest, which satisfies the completeness axiom: the sum of all attributions of a record equals the output determined by the Quantity of Interest on the same record.
The Point Distribution of Interest is determined by the gradient at a single point, making it a good measure of model sensitivity.
Parameters:
| Name | Type | Description | Default |
|------|------|-------------|---------|
| `model_args`, `model_kwargs` |  | The args and kwargs given to the call method of a model. This should represent the records to obtain attributions for, assumed to be a *batched* input. If `self.model` supports evaluation on *data tensors*, the appropriate tensor type may be used (e.g., Pytorch models may accept Pytorch tensors in addition to `np.ndarray`s). The shape of the inputs must match the input shape of `self.model`. | *required* |

Returns:

| Type | Description |
|------|-------------|
|  | An array of attributions, matching the shape and type of `from_cut` of the slice. Each entry in the returned array represents the degree to which the corresponding feature affected the model's outcome on the corresponding point. |
Source code in trulens/nn/attribution.py
```python
@abstractmethod
def attributions(self, *model_args, **model_kwargs):
    """
    Returns attributions for the given input. Attributions are in the same
    shape as the layer that attributions are being generated for.

    The numeric scale of the attributions will depend on the specific
    implementations of the Distribution of Interest and Quantity of
    Interest. However, it is generally related to the scale of gradients on
    the Quantity of Interest.

    For example, Integrated Gradients uses the linear interpolation
    Distribution of Interest, which satisfies the completeness axiom: the
    sum of all attributions of a record equals the output determined by the
    Quantity of Interest on the same record.

    The Point Distribution of Interest is determined by the gradient at a
    single point, making it a good measure of model sensitivity.

    Parameters:
        model_args, model_kwargs:
            The args and kwargs given to the call method of a model. This
            should represent the records to obtain attributions for,
            assumed to be a *batched* input. If `self.model` supports
            evaluation on *data tensors*, the appropriate tensor type may
            be used (e.g., Pytorch models may accept Pytorch tensors in
            addition to `np.ndarray`s). The shape of the inputs must match
            the input shape of `self.model`.

    Returns:
        An array of attributions, matching the shape and type of `from_cut`
        of the slice. Each entry in the returned array represents the
        degree to which the corresponding feature affected the model's
        outcome on the corresponding point.
    """
    raise NotImplementedError
```
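The completeness axiom mentioned above can be checked numerically. Below is a minimal NumPy sketch (not TruLens code) of the linear-interpolation Distribution of Interest applied to a toy linear model with an analytic gradient; for such a model the attributions sum exactly to the difference in output between the input and the baseline:

```python
import numpy as np

# Toy linear model f(x) = w . x, whose gradient is the constant vector w.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros_like(x)

# Linear-interpolation DoI: average the (here constant) gradient over the
# path from baseline to x, then multiply by the activation (x - baseline).
alphas = np.linspace(0.0, 1.0, 50)
avg_grad = np.mean([w for _ in alphas], axis=0)
attributions = avg_grad * (x - baseline)

# Completeness: attributions sum to f(x) - f(baseline).
assert np.isclose(attributions.sum(), w @ x - w @ baseline)
```

For nonlinear models the sum is approximate rather than exact, with the approximation quality controlled by the number of interpolation points.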
InputAttribution
¶
Attributions of input features on either internal or output quantities. This is essentially an alias for

```python
InternalInfluence(
    model,
    (trulens.nn.slices.InputCut(), cut),
    qoi,
    doi,
    multiply_activation)
```
__init__(self, model, cut=None, qoi='max', doi='point', multiply_activation=True)
special
¶
Parameters:
| Name | Type | Description | Default |
|------|------|-------------|---------|
| `model` | `ModelWrapper` | Model for which attributions are calculated. | *required* |
| `cut` | `Optional[Union[trulens.nn.slices.Cut, int, str]]` | The cut determining the layer from which the QoI is derived. Expects a `Cut` object, or a related type that can be interpreted as a `Cut`, as documented below. If an `int` is given, it represents the index of a layer in `model`. If a `str` is given, it represents the name of a layer in `model`. `None` is an alternative for `slices.OutputCut()`. | `None` |
| `qoi` | `Union[trulens.nn.quantities.QoI, int, Tuple[int], Callable, str]` | Quantity of interest to attribute. Expects a `QoI` object, or a related type that can be interpreted as a `QoI`, as documented below. If an `int` is given, the quantity of interest is taken to be the slice output for the class/neuron/channel specified by the given integer. If a tuple or list of two integers is given, the quantity of interest is taken to be the comparative quantity for the class given by the first integer against the class given by the second integer. If a callable is given, it is interpreted as a function representing the QoI. If the string `'max'` is given, the quantity of interest is taken to be the output for the class with the maximum score. | `'max'` |
| `doi` | `Union[trulens.nn.distributions.DoI, str]` | Distribution of interest over inputs. Expects a `DoI` object, or a related type that can be interpreted as a `DoI`, as documented below. If the string `'point'` is given, the distribution is taken to be the single point passed to `attributions`. If the string `'linear'` is given, the distribution is taken to be the linear interpolation from the zero input to the point passed to `attributions`. | `'point'` |
| `multiply_activation` | `bool` | Whether to multiply the gradient result by its corresponding activation, thus converting from "*influence space*" to "*attribution space*." | `True` |
Source code in trulens/nn/attribution.py
````python
def __init__(
        self,
        model: ModelWrapper,
        cut: CutLike = None,
        qoi: QoiLike = 'max',
        doi: DoiLike = 'point',
        multiply_activation: bool = True):
    """
    Parameters:
        model :
            Model for which attributions are calculated.

        cut :
            The cut determining the layer from which the QoI is derived.
            Expects a `Cut` object, or a related type that can be
            interpreted as a `Cut`, as documented below.

            If an `int` is given, it represents the index of a layer in
            `model`.

            If a `str` is given, it represents the name of a layer in
            `model`.

            `None` is an alternative for `slices.OutputCut()`.

        qoi : quantities.QoI | int | tuple | str
            Quantity of interest to attribute. Expects a `QoI` object, or a
            related type that can be interpreted as a `QoI`, as documented
            below.

            If an `int` is given, the quantity of interest is taken to be
            the slice output for the class/neuron/channel specified by the
            given integer, i.e.,

            ```python
            quantities.InternalChannelQoI(qoi)
            ```

            If a tuple or list of two integers is given, then the quantity
            of interest is taken to be the comparative quantity for the
            class given by the first integer against the class given by the
            second integer, i.e.,

            ```python
            quantities.ComparativeQoI(*qoi)
            ```

            If a callable is given, it is interpreted as a function
            representing the QoI, i.e.,

            ```python
            quantities.LambdaQoI(qoi)
            ```

            If the string `'max'` is given, the quantity of interest is
            taken to be the output for the class with the maximum score,
            i.e.,

            ```python
            quantities.MaxClassQoI()
            ```

        doi : distributions.DoI | str
            Distribution of interest over inputs. Expects a `DoI` object,
            or a related type that can be interpreted as a `DoI`, as
            documented below.

            If the string `'point'` is given, the distribution is taken to
            be the single point passed to `attributions`, i.e.,

            ```python
            distributions.PointDoi()
            ```

            If the string `'linear'` is given, the distribution is taken
            to be the linear interpolation from the zero input to the
            point passed to `attributions`, i.e.,

            ```python
            distributions.LinearDoi()
            ```

        multiply_activation : bool, optional
            Whether to multiply the gradient result by its corresponding
            activation, thus converting from "*influence space*" to
            "*attribution space*."
    """
    super().__init__(
        model, (InputCut(), cut),
        qoi,
        doi,
        multiply_activation=multiply_activation)
````
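The `qoi` shorthand documented above amounts to a simple dispatch on the argument's type. The sketch below mirrors that documented mapping; the function name and returned labels are illustrative only (the actual conversion happens inside TruLens, e.g. in `InternalInfluence`):

```python
def resolve_qoi(qoi):
    """Illustrative mirror of the documented QoI shorthand dispatch."""
    if isinstance(qoi, int) and not isinstance(qoi, bool):
        # -> quantities.InternalChannelQoI(qoi)
        return ('InternalChannelQoI', qoi)
    if isinstance(qoi, (tuple, list)) and len(qoi) == 2:
        # -> quantities.ComparativeQoI(*qoi)
        return ('ComparativeQoI', tuple(qoi))
    if callable(qoi):
        # -> quantities.LambdaQoI(qoi)
        return ('LambdaQoI', qoi)
    if qoi == 'max':
        # -> quantities.MaxClassQoI()
        return ('MaxClassQoI', None)
    raise ValueError(f'unrecognized QoI specification: {qoi!r}')
```

For example, `resolve_qoi((0, 1))` corresponds to the comparative quantity "class 0 versus class 1", while `resolve_qoi('max')` attributes toward the top-scoring class.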
IntegratedGradients
¶
Implementation for the Integrated Gradients method from the following paper:
Axiomatic Attribution for Deep Networks
This should be cited using:

```bibtex
@INPROCEEDINGS{
    sundararajan17axiomatic,
    author={Mukund Sundararajan and Ankur Taly and Qiqi Yan},
    title={Axiomatic Attribution for Deep Networks},
    booktitle={International Conference on Machine Learning (ICML)},
    year={2017},
}
```
This is essentially an alias for

```python
InternalInfluence(
    model,
    (trulens.nn.slices.InputCut(), trulens.nn.slices.OutputCut()),
    'max',
    trulens.nn.distributions.LinearDoi(baseline, resolution),
    multiply_activation=True)
```
__init__(self, model, baseline=None, resolution=50)
special
¶
Parameters:
| Name | Type | Description | Default |
|------|------|-------------|---------|
| `model` | `ModelWrapper` | Model for which attributions are calculated. | *required* |
| `baseline` |  | The baseline to interpolate from. Must be same shape as the input. If `None` is given, the zero vector in the appropriate shape will be used. | `None` |
| `resolution` | `int` | Number of points to use in the approximation. A higher resolution is more computationally expensive, but gives a better approximation of the mathematical formula this attribution method represents. | `50` |
Source code in trulens/nn/attribution.py
```python
def __init__(
        self, model: ModelWrapper, baseline=None, resolution: int = 50):
    """
    Parameters:
        model:
            Model for which attributions are calculated.

        baseline:
            The baseline to interpolate from. Must be same shape as the
            input. If `None` is given, the zero vector in the appropriate
            shape will be used.

        resolution:
            Number of points to use in the approximation. A higher
            resolution is more computationally expensive, but gives a
            better approximation of the mathematical formula this
            attribution method represents.
    """
    super().__init__(
        model,
        OutputCut(),
        'max',
        LinearDoi(baseline, resolution),
        multiply_activation=True)
```
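The effect of `resolution` shows up only on nonlinear models, where the path integral must be approximated. The following plain-NumPy sketch (an analytic gradient stands in for backpropagation; not TruLens code) approximates integrated gradients for f(x) = Σ xᵢ³ from a zero baseline, where the exact attribution of feature i is xᵢ³; the approximation error shrinks as the resolution grows:

```python
import numpy as np

# Analytic gradient of f(x) = sum(x ** 3): df/dx_i = 3 * x_i ** 2.
grad_f = lambda x: 3.0 * x ** 2

def integrated_gradients(x, baseline, resolution):
    # Average the gradient along the straight line from baseline to x,
    # then multiply by (x - baseline), as the linear DoI with activation
    # multiplication does.
    alphas = np.linspace(0.0, 1.0, resolution)
    avg_grad = np.mean(
        [grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return avg_grad * (x - baseline)

x = np.array([1.0, 2.0])
baseline = np.zeros_like(x)
exact = x ** 3  # closed-form attribution for this f and a zero baseline

err_coarse = np.abs(integrated_gradients(x, baseline, 50) - exact).max()
err_fine = np.abs(integrated_gradients(x, baseline, 500) - exact).max()
assert err_fine < err_coarse  # higher resolution, better approximation
```

This is the trade-off the `resolution` parameter controls: each additional interpolation point costs one more gradient evaluation but tightens the Riemann-style approximation of the underlying integral.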
InternalInfluence
¶
Internal attributions parameterized by a slice, quantity of interest, and distribution of interest.
The slice specifies the layers at which the internals of the model are to be exposed; it is represented by two cuts, which specify the layer the attributions are assigned to and the layer from which the quantity of interest is derived. The Quantity of Interest (QoI) is a function of the output specified by the slice that determines the network output behavior that the attributions are to describe. The Distribution of Interest (DoI) specifies the records over which the attributions are aggregated.
More information can be found in the following paper:
Influence-Directed Explanations for Deep Convolutional Networks
This should be cited using:

```bibtex
@INPROCEEDINGS{
    leino18influence,
    author={
        Klas Leino and
        Shayak Sen and
        Anupam Datta and
        Matt Fredrikson and
        Linyi Li},
    title={
        Influence-Directed Explanations
        for Deep Convolutional Networks},
    booktitle={IEEE International Test Conference (ITC)},
    year={2018},
}
```
__init__(self, model, cuts, qoi, doi, multiply_activation=True)
special
¶
Parameters:
| Name | Type | Description | Default |
|------|------|-------------|---------|
| `model` | `ModelWrapper` | Model for which attributions are calculated. | *required* |
| `cuts` | `Optional[Union[trulens.nn.slices.Slice, Tuple[Union[trulens.nn.slices.Cut, int, str, NoneType]], trulens.nn.slices.Cut, int, str]]` | The slice to use when computing the attributions. The slice keeps track of the layer whose output attributions are calculated and the layer for which the quantity of interest is computed. Expects a `Slice` object, or a related type that can be interpreted as a `Slice`, as documented below. If a single `Cut` object is given, it is assumed to be the cut representing the layer for which attributions are calculated (i.e., `from_cut` in `Slice`), and the layer for the quantity of interest (i.e., `to_cut` in `slices.Slice`) is taken to be the output of the network. If a tuple or list of two `Cut`s is given, they are assumed to be `from_cut` and `to_cut`, respectively. A cut (or the cuts within the tuple) can also be represented as an `int`, `str`, or `None`. If an `int` is given, it represents the index of a layer in `model`. If a `str` is given, it represents the name of a layer in `model`. `None` is an alternative for `slices.InputCut`. | *required* |
| `qoi` | `Union[trulens.nn.quantities.QoI, int, Tuple[int], Callable, str]` | Quantity of interest to attribute. Expects a `QoI` object, or a related type that can be interpreted as a `QoI`, as documented below. If an `int` is given, the quantity of interest is taken to be the slice output for the class/neuron/channel specified by the given integer. If a tuple or list of two integers is given, the quantity of interest is taken to be the comparative quantity for the class given by the first integer against the class given by the second integer. If a callable is given, it is interpreted as a function representing the QoI. If the string `'max'` is given, the quantity of interest is taken to be the output for the class with the maximum score. | *required* |
| `doi` | `Union[trulens.nn.distributions.DoI, str]` | Distribution of interest over inputs. Expects a `DoI` object, or a related type that can be interpreted as a `DoI`, as documented below. If the string `'point'` is given, the distribution is taken to be the single point passed to `attributions`. If the string `'linear'` is given, the distribution is taken to be the linear interpolation from the zero input to the point passed to `attributions`. | *required* |
| `multiply_activation` | `bool` | Whether to multiply the gradient result by its corresponding activation, thus converting from "*influence space*" to "*attribution space*." | `True` |
Source code in trulens/nn/attribution.py
````python
def __init__(
        self,
        model: ModelWrapper,
        cuts: SliceLike,
        qoi: QoiLike,
        doi: DoiLike,
        multiply_activation: bool = True):
    """
    Parameters:
        model:
            Model for which attributions are calculated.

        cuts:
            The slice to use when computing the attributions. The slice
            keeps track of the layer whose output attributions are
            calculated and the layer for which the quantity of interest is
            computed. Expects a `Slice` object, or a related type that can
            be interpreted as a `Slice`, as documented below.

            If a single `Cut` object is given, it is assumed to be the cut
            representing the layer for which attributions are calculated
            (i.e., `from_cut` in `Slice`), and the layer for the quantity
            of interest (i.e., `to_cut` in `slices.Slice`) is taken to be
            the output of the network. If a tuple or list of two `Cut`s is
            given, they are assumed to be `from_cut` and `to_cut`,
            respectively.

            A cut (or the cuts within the tuple) can also be represented
            as an `int`, `str`, or `None`. If an `int` is given, it
            represents the index of a layer in `model`. If a `str` is
            given, it represents the name of a layer in `model`. `None` is
            an alternative for `slices.InputCut`.

        qoi:
            Quantity of interest to attribute. Expects a `QoI` object, or
            a related type that can be interpreted as a `QoI`, as
            documented below.

            If an `int` is given, the quantity of interest is taken to be
            the slice output for the class/neuron/channel specified by the
            given integer, i.e.,

            ```python
            quantities.InternalChannelQoI(qoi)
            ```

            If a tuple or list of two integers is given, then the quantity
            of interest is taken to be the comparative quantity for the
            class given by the first integer against the class given by
            the second integer, i.e.,

            ```python
            quantities.ComparativeQoI(*qoi)
            ```

            If a callable is given, it is interpreted as a function
            representing the QoI, i.e.,

            ```python
            quantities.LambdaQoI(qoi)
            ```

            If the string `'max'` is given, the quantity of interest is
            taken to be the output for the class with the maximum score,
            i.e.,

            ```python
            quantities.MaxClassQoI()
            ```

        doi:
            Distribution of interest over inputs. Expects a `DoI` object,
            or a related type that can be interpreted as a `DoI`, as
            documented below.

            If the string `'point'` is given, the distribution is taken to
            be the single point passed to `attributions`, i.e.,

            ```python
            distributions.PointDoi()
            ```

            If the string `'linear'` is given, the distribution is taken
            to be the linear interpolation from the zero input to the
            point passed to `attributions`, i.e.,

            ```python
            distributions.LinearDoi()
            ```

        multiply_activation:
            Whether to multiply the gradient result by its corresponding
            activation, thus converting from "*influence space*" to
            "*attribution space*."
    """
    super().__init__(model)

    self.slice = InternalInfluence.__get_slice(cuts)
    self.qoi = InternalInfluence.__get_qoi(qoi)
    self.doi = InternalInfluence.__get_doi(doi)
    self._do_multiply = multiply_activation
````
attributions(self, *model_args, **model_kwargs)
¶
Returns attributions for the given input. Attributions are in the same shape as the layer that attributions are being generated for.
The numeric scale of the attributions will depend on the specific implementations of the Distribution of Interest and Quantity of Interest. However, it is generally related to the scale of gradients on the Quantity of Interest.
For example, Integrated Gradients uses the linear interpolation Distribution of Interest, which satisfies the completeness axiom: the sum of all attributions of a record equals the output determined by the Quantity of Interest on the same record.
The Point Distribution of Interest is determined by the gradient at a single point, making it a good measure of model sensitivity.
Parameters:
| Name | Type | Description | Default |
|------|------|-------------|---------|
| `model_args`, `model_kwargs` |  | The args and kwargs given to the call method of a model. This should represent the records to obtain attributions for, assumed to be a *batched* input. If `self.model` supports evaluation on *data tensors*, the appropriate tensor type may be used (e.g., Pytorch models may accept Pytorch tensors in addition to `np.ndarray`s). The shape of the inputs must match the input shape of `self.model`. | *required* |

Returns:

| Type | Description |
|------|-------------|
|  | An array of attributions, matching the shape and type of `from_cut` of the slice. Each entry in the returned array represents the degree to which the corresponding feature affected the model's outcome on the corresponding point. |
Source code in trulens/nn/attribution.py
```python
def attributions(self, *model_args, **model_kwargs):
    doi_cut = self.doi.cut() if self.doi.cut() else InputCut()

    doi_val = self.model.fprop(model_args, model_kwargs, to_cut=doi_cut)

    # DoI supports a tensor or a list of tensors; unwrap args to perform
    # the DoI on the top-level list. Depending on the model_args input,
    # the data may be nested in data containers. We unwrap so that the
    # operations work on a single level of data container.
    if isinstance(doi_val, DATA_CONTAINER_TYPE) and isinstance(
            doi_val[0], DATA_CONTAINER_TYPE):
        doi_val = doi_val[0]

    if isinstance(doi_val, DATA_CONTAINER_TYPE) and len(doi_val) == 1:
        doi_val = doi_val[0]

    D = self.doi(doi_val)
    n_doi = len(D)
    D = InternalInfluence.__concatenate_doi(D)

    # Calculate the gradient of each of the points in the DoI.
    qoi_grads = self.model.qoi_bprop(
        self.qoi,
        model_args,
        model_kwargs,
        attribution_cut=self.slice.from_cut,
        to_cut=self.slice.to_cut,
        intervention=D,
        doi_cut=doi_cut)

    # Take the mean across the samples in the DoI.
    if isinstance(qoi_grads, DATA_CONTAINER_TYPE):
        attributions = [
            get_backend().mean(
                get_backend().reshape(
                    qoi_grad, (n_doi, -1) + qoi_grad.shape[1:]),
                axis=0) for qoi_grad in qoi_grads
        ]
    else:
        attributions = get_backend().mean(
            get_backend().reshape(
                qoi_grads, (n_doi, -1) + qoi_grads.shape[1:]),
            axis=0)

    # Multiply by the activation multiplier if specified.
    if self._do_multiply:
        z_val = self.model.fprop(
            model_args, model_kwargs, to_cut=self.slice.from_cut)
        if isinstance(z_val, DATA_CONTAINER_TYPE) and len(z_val) == 1:
            z_val = z_val[0]

        if isinstance(attributions, DATA_CONTAINER_TYPE):
            for i in range(len(attributions)):
                if isinstance(z_val, DATA_CONTAINER_TYPE) and len(
                        z_val) == len(attributions):
                    attributions[i] *= self.doi.get_activation_multiplier(
                        z_val[i])
                else:
                    attributions[i] *= (
                        self.doi.get_activation_multiplier(z_val))
        else:
            attributions *= self.doi.get_activation_multiplier(z_val)

    return attributions
```
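The mean-over-DoI step can be sketched in isolation. Assuming the DoI samples are stacked in front of the batch axis (as the concatenation step produces), reshaping with a `(n_doi, -1)` leading shape recovers the per-record axis, so the mean runs over DoI samples only. A plain-NumPy illustration:

```python
import numpy as np

n_doi, batch, features = 5, 2, 3

# Stacked gradients: one row per (DoI sample, record) pair, with the DoI
# axis concatenated in front of the batch axis.
qoi_grads = np.arange(n_doi * batch * features, dtype=float).reshape(
    n_doi * batch, features)

# Reshape to (n_doi, batch, ...) and average over the DoI axis, as the
# attribution method does before the optional activation multiplication.
attributions = qoi_grads.reshape(
    (n_doi, -1) + qoi_grads.shape[1:]).mean(axis=0)

assert attributions.shape == (batch, features)
# Each entry is the mean of that record's gradients across DoI samples.
assert np.allclose(
    attributions,
    qoi_grads.reshape(n_doi, batch, features).mean(axis=0))
```

With a Point DoI, `n_doi` is 1 and the mean is a no-op; with a linear DoI, this is where the gradients along the interpolation path are averaged.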