Visualization Methods

One clear use case for measuring attributions is human consumption. To be fully leveraged by humans, explanations need to be interpretable: a large vector of numbers doesn't, in general, make us more confident that we understand what a network is doing. We therefore view an explanation as comprising both an attribution measurement and an interpretation of what the attribution values represent.

One obvious way to interpret attributions, particularly in the image domain, is via visualization. This module provides several visualization methods for interpreting attributions as images.

ChannelMaskVisualizer

Bases: object

Uses internal influence to visualize the pixels that are most salient towards a particular internal channel or neuron.

Source code in trulens_explain/trulens/visualizations.py, lines 846-1010:
class ChannelMaskVisualizer(object):
    """
    Uses internal influence to visualize the pixels that are most salient
    towards a particular internal channel or neuron.
    """

    def __init__(
        self,
        model,
        layer,
        channel,
        channel_axis=None,
        agg_fn=None,
        doi=None,
        blur=None,
        threshold=0.5,
        masked_opacity=0.2,
        combine_channels: bool = True,
        use_attr_as_opacity=None,
        positive_only=None
    ):
        """
        Configures the default parameters for the `__call__` method (these can 
        be overridden by passing in values to `__call__`).

        Parameters:
            model:
                The wrapped model whose channel we're visualizing.

            layer:
                The identifier (either index or name) of the layer in which the 
                channel we're visualizing resides.

            channel:
                Index of the channel (for convolutional layers) or internal 
                neuron (for fully-connected layers) that we'd like to visualize.

            channel_axis:
                If different from the channel axis specified by the backend, the
                supplied `channel_axis` will be used if operating on a 
                convolutional layer with 4-D image format.

            agg_fn:
                Function with which to aggregate the remaining dimensions 
                (except the batch dimension) in order to get a single scalar 
                value for each channel; If `None`, a sum over each neuron in the
                channel will be taken. This argument is not used when the 
                channels are scalars, e.g., for dense layers.

            doi:
                The distribution of interest to use when computing the input
                attributions towards the specified channel. If `None`, 
                `PointDoI` will be used.

            blur:
                Gives the radius of a Gaussian blur to be applied to the 
                attributions before visualizing. This can be used to help focus
                on salient regions rather than specific salient pixels.

            threshold:
                Value in the range [0, 1]. Attribution values at or below the
                percentile given by `threshold` (after normalization, blurring,
                etc.) will be masked.

            masked_opacity: 
                Value in the range [0, 1] specifying the opacity for the parts
                of the image that are masked.

            combine_channels:
                If `True`, the attributions will be averaged across the channel
                dimension, resulting in a 1-channel attribution map.

            use_attr_as_opacity:
                If `True`, instead of using `threshold` and `masked_opacity`,
                the opacity of each pixel is given by the 0-1-normalized 
                attribution value.

            positive_only:
                If `True`, only pixels with positive attribution will be 
                unmasked (or given nonzero opacity when `use_attr_as_opacity` is
                true).
        """
        B = get_backend()
        if (B is not None and (channel_axis is None or channel_axis < 0)):
            channel_axis = B.channel_axis
        elif (channel_axis is None or channel_axis < 0):
            channel_axis = 1

        self.mask_visualizer = MaskVisualizer(
            blur, threshold, masked_opacity, combine_channels,
            use_attr_as_opacity, positive_only
        )

        self.infl_input = InternalInfluence(
            model, (InputCut(), Cut(layer)),
            InternalChannelQoI(channel, channel_axis, agg_fn),
            PointDoi() if doi is None else doi
        )

    def __call__(
        self,
        x,
        x_preprocessed=None,
        output_file=None,
        blur=None,
        threshold=None,
        masked_opacity=None,
        combine_channels=None
    ):
        """
        Visualizes the given attributions by overlaying an attribution heatmap 
        over the given image.

        Parameters
        ----------
        attributions : numpy.ndarray
            The attributions to visualize. Expected to be in 4-D image format.

        x : numpy.ndarray
            The original image(s) over which the attributions are calculated.
            Must be the same shape as expected by the model used with this
            visualizer.

        x_preprocessed : numpy.ndarray, optional
            If the model requires a preprocessed input (e.g., with the mean
            subtracted) that is different from how the image should be 
            visualized, ``x_preprocessed`` should be specified. In this case 
            ``x`` will be used for visualization, and ``x_preprocessed`` will be
            passed to the model when calculating attributions. Must be the same 
            shape as ``x``.

        output_file : str, optional
            If specified, the resulting visualization will be saved to a file
            with the name given by ``output_file``.

        blur : float, optional
            If specified, gives the radius of a Gaussian blur to be applied to
            the attributions before visualizing. This can be used to help focus
            on salient regions rather than specific salient pixels. If None, 
            defaults to the value supplied to the constructor. Default None.

        threshold : float
            Value in the range [0, 1]. Attribution values at or below the
            percentile given by ``threshold`` will be masked. If None, defaults
            to the value supplied to the constructor. Default None.

        masked_opacity : float
            Value in the range [0, 1] specifying the opacity for the parts of
            the image that are masked. If None, defaults to the value supplied
            to the constructor. Default None.

        combine_channels : bool
            If True, the attributions will be averaged across the channel
            dimension, resulting in a 1-channel attribution map. If None, 
            defaults to the value supplied to the constructor. Default None.
        """

        attrs_input = self.infl_input.attributions(
            x if x_preprocessed is None else x_preprocessed
        )

        return self.mask_visualizer(
            attrs_input, x, output_file, blur, threshold, masked_opacity,
            combine_channels
        )

__call__(x, x_preprocessed=None, output_file=None, blur=None, threshold=None, masked_opacity=None, combine_channels=None)

Visualizes the given attributions by overlaying an attribution heatmap over the given image.

Parameters:

    x : numpy.ndarray
        The original image(s) over which the attributions are calculated. Must be the same shape as expected by the model used with this visualizer.

    x_preprocessed : numpy.ndarray, optional
        If the model requires a preprocessed input (e.g., with the mean subtracted) that is different from how the image should be visualized, x_preprocessed should be specified. In this case x will be used for visualization, and x_preprocessed will be passed to the model when calculating attributions. Must be the same shape as x.

    output_file : str, optional
        If specified, the resulting visualization will be saved to a file with the name given by output_file.

    blur : float, optional
        If specified, gives the radius of a Gaussian blur to be applied to the attributions before visualizing. This can be used to help focus on salient regions rather than specific salient pixels. If None, defaults to the value supplied to the constructor. Default None.

    threshold : float, optional
        Value in the range [0, 1]. Attribution values at or below the percentile given by threshold will be masked. If None, defaults to the value supplied to the constructor. Default None.

    masked_opacity : float, optional
        Value in the range [0, 1] specifying the opacity for the parts of the image that are masked. If None, defaults to the value supplied to the constructor. Default None.

    combine_channels : bool, optional
        If True, the attributions will be averaged across the channel dimension, resulting in a 1-channel attribution map. If None, defaults to the value supplied to the constructor. Default None.

Source code in trulens_explain/trulens/visualizations.py, lines 945-1010:
def __call__(
    self,
    x,
    x_preprocessed=None,
    output_file=None,
    blur=None,
    threshold=None,
    masked_opacity=None,
    combine_channels=None
):
    """
    Visualizes the given attributions by overlaying an attribution heatmap 
    over the given image.

    Parameters
    ----------
    attributions : numpy.ndarray
        The attributions to visualize. Expected to be in 4-D image format.

    x : numpy.ndarray
        The original image(s) over which the attributions are calculated.
        Must be the same shape as expected by the model used with this
        visualizer.

    x_preprocessed : numpy.ndarray, optional
        If the model requires a preprocessed input (e.g., with the mean
        subtracted) that is different from how the image should be 
        visualized, ``x_preprocessed`` should be specified. In this case 
        ``x`` will be used for visualization, and ``x_preprocessed`` will be
        passed to the model when calculating attributions. Must be the same 
        shape as ``x``.

    output_file : str, optional
        If specified, the resulting visualization will be saved to a file
        with the name given by ``output_file``.

    blur : float, optional
        If specified, gives the radius of a Gaussian blur to be applied to
        the attributions before visualizing. This can be used to help focus
        on salient regions rather than specific salient pixels. If None, 
        defaults to the value supplied to the constructor. Default None.

    threshold : float
        Value in the range [0, 1]. Attribution values at or below the
        percentile given by ``threshold`` will be masked. If None, defaults
        to the value supplied to the constructor. Default None.

    masked_opacity : float
        Value in the range [0, 1] specifying the opacity for the parts of
        the image that are masked. If None, defaults to the value supplied
        to the constructor. Default None.

    combine_channels : bool
        If True, the attributions will be averaged across the channel
        dimension, resulting in a 1-channel attribution map. If None, 
        defaults to the value supplied to the constructor. Default None.
    """

    attrs_input = self.infl_input.attributions(
        x if x_preprocessed is None else x_preprocessed
    )

    return self.mask_visualizer(
        attrs_input, x, output_file, blur, threshold, masked_opacity,
        combine_channels
    )

__init__(model, layer, channel, channel_axis=None, agg_fn=None, doi=None, blur=None, threshold=0.5, masked_opacity=0.2, combine_channels=True, use_attr_as_opacity=None, positive_only=None)

Configures the default parameters for the __call__ method (these can be overridden by passing in values to __call__).

Parameters:

    model (required)
        The wrapped model whose channel we're visualizing.

    layer (required)
        The identifier (either index or name) of the layer in which the channel we're visualizing resides.

    channel (required)
        Index of the channel (for convolutional layers) or internal neuron (for fully-connected layers) that we'd like to visualize.

    channel_axis (default: None)
        If different from the channel axis specified by the backend, the supplied channel_axis will be used if operating on a convolutional layer with 4-D image format.

    agg_fn (default: None)
        Function with which to aggregate the remaining dimensions (except the batch dimension) in order to get a single scalar value for each channel. If None, a sum over each neuron in the channel will be taken. This argument is not used when the channels are scalars, e.g., for dense layers.

    doi (default: None)
        The distribution of interest to use when computing the input attributions towards the specified channel. If None, PointDoI will be used.

    blur (default: None)
        Gives the radius of a Gaussian blur to be applied to the attributions before visualizing. This can be used to help focus on salient regions rather than specific salient pixels.

    threshold (default: 0.5)
        Value in the range [0, 1]. Attribution values at or below the percentile given by threshold (after normalization, blurring, etc.) will be masked.

    masked_opacity (default: 0.2)
        Value in the range [0, 1] specifying the opacity for the parts of the image that are masked.

    combine_channels (bool, default: True)
        If True, the attributions will be averaged across the channel dimension, resulting in a 1-channel attribution map.

    use_attr_as_opacity (default: None)
        If True, instead of using threshold and masked_opacity, the opacity of each pixel is given by the 0-1-normalized attribution value.

    positive_only (default: None)
        If True, only pixels with positive attribution will be unmasked (or given nonzero opacity when use_attr_as_opacity is true).
Source code in trulens_explain/trulens/visualizations.py, lines 852-943:
def __init__(
    self,
    model,
    layer,
    channel,
    channel_axis=None,
    agg_fn=None,
    doi=None,
    blur=None,
    threshold=0.5,
    masked_opacity=0.2,
    combine_channels: bool = True,
    use_attr_as_opacity=None,
    positive_only=None
):
    """
    Configures the default parameters for the `__call__` method (these can 
    be overridden by passing in values to `__call__`).

    Parameters:
        model:
            The wrapped model whose channel we're visualizing.

        layer:
            The identifier (either index or name) of the layer in which the 
            channel we're visualizing resides.

        channel:
            Index of the channel (for convolutional layers) or internal 
            neuron (for fully-connected layers) that we'd like to visualize.

        channel_axis:
            If different from the channel axis specified by the backend, the
            supplied `channel_axis` will be used if operating on a 
            convolutional layer with 4-D image format.

        agg_fn:
            Function with which to aggregate the remaining dimensions 
            (except the batch dimension) in order to get a single scalar 
            value for each channel; If `None`, a sum over each neuron in the
            channel will be taken. This argument is not used when the 
            channels are scalars, e.g., for dense layers.

        doi:
            The distribution of interest to use when computing the input
            attributions towards the specified channel. If `None`, 
            `PointDoI` will be used.

        blur:
            Gives the radius of a Gaussian blur to be applied to the 
            attributions before visualizing. This can be used to help focus
            on salient regions rather than specific salient pixels.

        threshold:
            Value in the range [0, 1]. Attribution values at or below the
            percentile given by `threshold` (after normalization, blurring,
            etc.) will be masked.

        masked_opacity: 
            Value in the range [0, 1] specifying the opacity for the parts
            of the image that are masked.

        combine_channels:
            If `True`, the attributions will be averaged across the channel
            dimension, resulting in a 1-channel attribution map.

        use_attr_as_opacity:
            If `True`, instead of using `threshold` and `masked_opacity`,
            the opacity of each pixel is given by the 0-1-normalized 
            attribution value.

        positive_only:
            If `True`, only pixels with positive attribution will be 
            unmasked (or given nonzero opacity when `use_attr_as_opacity` is
            true).
    """
    B = get_backend()
    if (B is not None and (channel_axis is None or channel_axis < 0)):
        channel_axis = B.channel_axis
    elif (channel_axis is None or channel_axis < 0):
        channel_axis = 1

    self.mask_visualizer = MaskVisualizer(
        blur, threshold, masked_opacity, combine_channels,
        use_attr_as_opacity, positive_only
    )

    self.infl_input = InternalInfluence(
        model, (InputCut(), Cut(layer)),
        InternalChannelQoI(channel, channel_axis, agg_fn),
        PointDoi() if doi is None else doi
    )
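
A minimal usage sketch (not from the library docs) follows, assuming a TruLens-wrapped image model and a batch of images in the 4-D format the model expects; the import path, layer name, and channel index are illustrative assumptions.

import numpy as np

from trulens.visualizations import ChannelMaskVisualizer  # import path assumed

# `model` is assumed to be an already-wrapped model (e.g., produced by the
# trulens model-wrapping utilities) and `x` a batch of images shaped as the
# model expects.
visualizer = ChannelMaskVisualizer(
    model,
    layer='conv5_block3_out',  # hypothetical layer identifier
    channel=42,                # channel of that layer to visualize
    blur=5.,
    threshold=0.7,             # mask pixels at or below the 70th percentile
    masked_opacity=0.1
)

# Computes internal-influence attributions of the input towards the chosen
# channel and returns the image data with low-attribution regions dimmed.
masked = visualizer(x)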

HTML

Bases: Output

HTML visualization output format.

Source code in trulens_explain/trulens/visualizations.py, lines 1073-1119:
class HTML(Output):
    """HTML visualization output format."""

    def __init__(self):
        try:
            self.m_html = importlib.import_module("html")
        except ImportError:
            raise ImportError(
                "HTML output requires html python module. Try 'pip install html'."
            )

    def blank(self):
        return ""

    def space(self):
        return "&nbsp;"

    def escape(self, s):
        return self.m_html.escape(s)

    def linebreak(self):
        return "<br/>"

    def line(self, s):
        return f"<span style='padding: 2px; margin: 2px; background: gray; border-radius: 4px;'>{s}</span>"

    def magnitude_colored(self, s, mag):
        red = 0.0
        green = 0.0
        if mag > 0:
            green = 1.0  # 0.5 + mag * 0.5
            red = 1.0 - mag * 0.5
        else:
            red = 1.0
            green = 1.0 + mag * 0.5
            #red = 0.5 - mag * 0.5

        blue = min(red, green)
        # blue = 1.0 - max(red, green)

        return f"<span title='{mag:0.3f}' style='margin: 1px; padding: 1px; border-radius: 4px; background: black; color: rgb({red*255}, {green*255}, {blue*255});'>{s}</span>"

    def append(self, *pieces):
        return ''.join(pieces)

    def render(self, s):
        return s
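
As a quick illustration (not part of the library documentation), the HTML output can be used on its own to color tokens by attribution magnitude; the tokens and magnitudes below are made up.

from trulens.visualizations import HTML  # import path assumed

out = HTML()

# Hypothetical tokens paired with attribution magnitudes in [-1, 1].
tokens = [("the", 0.05), ("movie", 0.8), ("was", 0.1), ("dreadful", -0.9)]

# Color each escaped token by its magnitude and join the pieces into one snippet.
pieces = [out.magnitude_colored(out.escape(tok), mag) for tok, mag in tokens]
snippet = out.render(out.append(*pieces))

print(snippet)  # a raw HTML string; embed it in a page or notebook as needed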

HeatmapVisualizer

Bases: Visualizer

Visualizes attributions by overlaying an attribution heatmap over the original image, similar to how GradCAM visualizes attributions.

Source code in trulens_explain/trulens/visualizations.py, lines 456-680:
class HeatmapVisualizer(Visualizer):
    """
    Visualizes attributions by overlaying an attribution heatmap over the
    original image, similar to how GradCAM visualizes attributions.
    """

    def __init__(
        self,
        overlay_opacity=0.5,
        normalization_type=None,
        blur=10.,
        cmap='jet'
    ):
        """
        Configures the default parameters for the `__call__` method (these can 
        be overridden by passing in values to `__call__`).

        Parameters:
            overlay_opacity: float
                Value in the range [0, 1] specifying the opacity for the heatmap
                overlay.

            normalization_type:
                Specifies one of the following configurations for normalizing
                the attributions (each item is normalized separately):

                - `'unsigned_max'`: normalizes the attributions to the range 
                  [-1, 1] by dividing the attributions by the maximum absolute 
                  attribution value.
                - `'unsigned_max_positive_centered'`: same as above, but scales
                  the values to the range [0, 1], with negative scores less than
                  0.5 and positive scores greater than 0.5. 
                - `'magnitude_max'`: takes the absolute value of the 
                  attributions, then normalizes the attributions to the range 
                  [0, 1] by dividing by the maximum absolute attribution value.
                - `'magnitude_sum'`: takes the absolute value of the 
                  attributions, then scales them such that they sum to 1. If 
                  this option is used, each channel is normalized separately, 
                  such that each channel sums to 1.
                - `'signed_max'`: normalizes the attributions to the range 
                  [-1, 1] by dividing the positive values by the maximum 
                  positive attribution value and the negative values by the 
                  minimum negative attribution value.
                - `'signed_max_positive_centered'`: same as above, but scales 
                  the values to the range [0, 1], with negative scores less than
                  0.5 and positive scores greater than 0.5.
                - `'signed_sum'`: scales the positive attributions such that 
                  they sum to 1 and the negative attributions such that they
                  scale to -1. If this option is used, each channel is 
                  normalized separately.
                - `'01'`: normalizes the attributions to the range [0, 1] by 
                  subtracting the minimum attribution value then dividing by the
                  maximum attribution value.
                - `'unnormalized'`: leaves the attributions unaffected.

                If `None`, either `'unsigned_max'` (for single-channel data) or 
                `'unsigned_max_positive_centered'` (for multi-channel data) is
                used.

            blur:
                Gives the radius of a Gaussian blur to be applied to the 
                attributions before visualizing. This can be used to help focus
                on salient regions rather than specific salient pixels.

            cmap: matplotlib.colors.Colormap | str, optional
                Colormap or name of a Colormap to use for the visualization. If 
                `None`, the colormap will be chosen based on the normalization 
                type. This argument is only used for single-channel data
                (including when `combine_channels` is True).
        """

        super().__init__(
            combine_channels=True,
            normalization_type=normalization_type,
            blur=blur,
            cmap=cmap
        )

        self.default_overlay_opacity = overlay_opacity

    def __call__(
        self,
        attributions,
        x,
        output_file=None,
        imshow=True,
        fig=None,
        return_tiled=False,
        overlay_opacity=None,
        normalization_type=None,
        blur=None,
        cmap=None
    ) -> np.ndarray:
        """
        Visualizes the given attributions by overlaying an attribution heatmap 
        over the given image.

        Parameters:
            attributions:
                A `np.ndarray` containing the attributions to be visualized.

            x:
                A `np.ndarray` of items in the same shape as `attributions`
                corresponding to the records explained by the given 
                attributions. The visualization will be superimposed onto the
                corresponding set of records.

            output_file:
                File name to save the visualization image to. If `None`, no
                image will be saved, but the figure can still be displayed.

            imshow:
                If true, the visualization will be displayed. Otherwise the
                figure will not be displayed, but the figure can still be saved.

            fig:
                The `pyplot` figure to display the visualization in. If `None`,
                a new figure will be created.

            return_tiled:
                If true, the returned array will be in the same shape as the
                visualization, with no batch dimension and the samples in the
                batch tiled along the width and height dimensions. If false, the
                returned array will be reshaped to match `attributions`.

            overlay_opacity: float
                Value in the range [0, 1] specifying the opacity for the heatmap
                overlay. If `None`, defaults to the value supplied to the 
                constructor.

            normalization_type:
                Specifies one of the following configurations for normalizing
                the attributions (each item is normalized separately):

                - `'unsigned_max'`: normalizes the attributions to the range 
                  [-1, 1] by dividing the attributions by the maximum absolute 
                  attribution value.
                - `'unsigned_max_positive_centered'`: same as above, but scales
                  the values to the range [0, 1], with negative scores less than
                  0.5 and positive scores greater than 0.5. 
                - `'magnitude_max'`: takes the absolute value of the 
                  attributions, then normalizes the attributions to the range 
                  [0, 1] by dividing by the maximum absolute attribution value.
                - `'magnitude_sum'`: takes the absolute value of the 
                  attributions, then scales them such that they sum to 1. If 
                  this option is used, each channel is normalized separately, 
                  such that each channel sums to 1.
                - `'signed_max'`: normalizes the attributions to the range 
                  [-1, 1] by dividing the positive values by the maximum 
                  positive attribution value and the negative values by the 
                  minimum negative attribution value.
                - `'signed_max_positive_centered'`: same as above, but scales 
                  the values to the range [0, 1], with negative scores less than
                  0.5 and positive scores greater than 0.5.
                - `'signed_sum'`: scales the positive attributions such that 
                  they sum to 1 and the negative attributions such that they
                  scale to -1. If this option is used, each channel is 
                  normalized separately.
                - `'01'`: normalizes the attributions to the range [0, 1] by 
                  subtracting the minimum attribution value then dividing by the
                  maximum attribution value.
                - `'unnormalized'`: leaves the attributions unaffected.

                If `None`, defaults to the value supplied to the constructor.

            blur:
                Gives the radius of a Gaussian blur to be applied to the 
                attributions before visualizing. This can be used to help focus
                on salient regions rather than specific salient pixels. If
                `None`, defaults to the value supplied to the constructor.

            cmap: matplotlib.colors.Colormap | str, optional
                Colormap or name of a Colormap to use for the visualization. If
                `None`, defaults to the value supplied to the constructor.

        Returns:
            A `np.ndarray` array of the numerical representation of the
            attributions as modified for the visualization. This includes 
            normalization, blurring, etc.
        """
        _, normalization_type, blur, cmap = self._check_args(
            attributions, None, normalization_type, blur, cmap
        )

        # Combine the channels.
        attributions = attributions.mean(
            axis=get_backend().channel_axis, keepdims=True
        )

        # Blur the attributions so the explanation is smoother.
        if blur:
            attributions = self._blur(attributions, blur)

        # Normalize the attributions.
        attributions = self._normalize(attributions, normalization_type)

        tiled_attributions = self.tiler.tile(attributions)

        # Normalize the pixels to be in the range [0, 1].
        x = self._normalize(x, '01')
        tiled_x = self.tiler.tile(x)

        if cmap is None:
            cmap = self.default_cmap

        if overlay_opacity is None:
            overlay_opacity = self.default_overlay_opacity

        # Display the figure:
        _fig = plt.figure() if fig is None else fig

        plt.axis('off')
        plt.imshow(tiled_x)
        plt.imshow(tiled_attributions, alpha=overlay_opacity, cmap=cmap)

        if output_file:
            plt.savefig(output_file, bbox_inches=0)

        if imshow:
            plt.show()

        elif fig is None:
            plt.close(_fig)

        return tiled_attributions if return_tiled else attributions

__call__(attributions, x, output_file=None, imshow=True, fig=None, return_tiled=False, overlay_opacity=None, normalization_type=None, blur=None, cmap=None)

Visualizes the given attributions by overlaying an attribution heatmap over the given image.

Parameters:

    attributions (required)
        A np.ndarray containing the attributions to be visualized.

    x (required)
        A np.ndarray of items in the same shape as attributions corresponding to the records explained by the given attributions. The visualization will be superimposed onto the corresponding set of records.

    output_file (default: None)
        File name to save the visualization image to. If None, no image will be saved, but the figure can still be displayed.

    imshow (default: True)
        If True, the visualization will be displayed. Otherwise the figure will not be displayed, but the figure can still be saved.

    fig (default: None)
        The pyplot figure to display the visualization in. If None, a new figure will be created.

    return_tiled (default: False)
        If True, the returned array will be in the same shape as the visualization, with no batch dimension and the samples in the batch tiled along the width and height dimensions. If False, the returned array will be reshaped to match attributions.

    overlay_opacity (float, default: None)
        Value in the range [0, 1] specifying the opacity for the heatmap overlay. If None, defaults to the value supplied to the constructor.

    normalization_type (default: None)
        Specifies one of the following configurations for normalizing the attributions (each item is normalized separately):

        - 'unsigned_max': normalizes the attributions to the range [-1, 1] by dividing the attributions by the maximum absolute attribution value.
        - 'unsigned_max_positive_centered': same as above, but scales the values to the range [0, 1], with negative scores less than 0.5 and positive scores greater than 0.5.
        - 'magnitude_max': takes the absolute value of the attributions, then normalizes the attributions to the range [0, 1] by dividing by the maximum absolute attribution value.
        - 'magnitude_sum': takes the absolute value of the attributions, then scales them such that they sum to 1. If this option is used, each channel is normalized separately, such that each channel sums to 1.
        - 'signed_max': normalizes the attributions to the range [-1, 1] by dividing the positive values by the maximum positive attribution value and the negative values by the minimum negative attribution value.
        - 'signed_max_positive_centered': same as above, but scales the values to the range [0, 1], with negative scores less than 0.5 and positive scores greater than 0.5.
        - 'signed_sum': scales the positive attributions such that they sum to 1 and the negative attributions such that they sum to -1. If this option is used, each channel is normalized separately.
        - '01': normalizes the attributions to the range [0, 1] by subtracting the minimum attribution value then dividing by the maximum attribution value.
        - 'unnormalized': leaves the attributions unaffected.

        If None, defaults to the value supplied to the constructor.

    blur (default: None)
        Gives the radius of a Gaussian blur to be applied to the attributions before visualizing. This can be used to help focus on salient regions rather than specific salient pixels. If None, defaults to the value supplied to the constructor.

    cmap (matplotlib.colors.Colormap | str, default: None)
        Colormap or name of a Colormap to use for the visualization. If None, defaults to the value supplied to the constructor.

Returns:

    np.ndarray
        The numerical representation of the attributions as modified for the visualization, including normalization, blurring, etc.

Source code in trulens_explain/trulens/visualizations.py, lines 536-680:
def __call__(
    self,
    attributions,
    x,
    output_file=None,
    imshow=True,
    fig=None,
    return_tiled=False,
    overlay_opacity=None,
    normalization_type=None,
    blur=None,
    cmap=None
) -> np.ndarray:
    """
    Visualizes the given attributions by overlaying an attribution heatmap 
    over the given image.

    Parameters:
        attributions:
            A `np.ndarray` containing the attributions to be visualized.

        x:
            A `np.ndarray` of items in the same shape as `attributions`
            corresponding to the records explained by the given 
            attributions. The visualization will be superimposed onto the
            corresponding set of records.

        output_file:
            File name to save the visualization image to. If `None`, no
            image will be saved, but the figure can still be displayed.

        imshow:
            If true, the visualization will be displayed. Otherwise the
            figure will not be displayed, but the figure can still be saved.

        fig:
            The `pyplot` figure to display the visualization in. If `None`,
            a new figure will be created.

        return_tiled:
            If true, the returned array will be in the same shape as the
            visualization, with no batch dimension and the samples in the
            batch tiled along the width and height dimensions. If false, the
            returned array will be reshaped to match `attributions`.

        overlay_opacity: float
            Value in the range [0, 1] specifying the opacity for the heatmap
            overlay. If `None`, defaults to the value supplied to the 
            constructor.

        normalization_type:
            Specifies one of the following configurations for normalizing
            the attributions (each item is normalized separately):

            - `'unsigned_max'`: normalizes the attributions to the range 
              [-1, 1] by dividing the attributions by the maximum absolute 
              attribution value.
            - `'unsigned_max_positive_centered'`: same as above, but scales
              the values to the range [0, 1], with negative scores less than
              0.5 and positive scores greater than 0.5. 
            - `'magnitude_max'`: takes the absolute value of the 
              attributions, then normalizes the attributions to the range 
              [0, 1] by dividing by the maximum absolute attribution value.
            - `'magnitude_sum'`: takes the absolute value of the 
              attributions, then scales them such that they sum to 1. If 
              this option is used, each channel is normalized separately, 
              such that each channel sums to 1.
            - `'signed_max'`: normalizes the attributions to the range 
              [-1, 1] by dividing the positive values by the maximum 
              positive attribution value and the negative values by the 
              minimum negative attribution value.
            - `'signed_max_positive_centered'`: same as above, but scales 
              the values to the range [0, 1], with negative scores less than
              0.5 and positive scores greater than 0.5.
            - `'signed_sum'`: scales the positive attributions such that 
              they sum to 1 and the negative attributions such that they
              scale to -1. If this option is used, each channel is 
              normalized separately.
            - `'01'`: normalizes the attributions to the range [0, 1] by 
              subtracting the minimum attribution value then dividing by the
              maximum attribution value.
            - `'unnormalized'`: leaves the attributions unaffected.

            If `None`, defaults to the value supplied to the constructor.

        blur:
            Gives the radius of a Gaussian blur to be applied to the 
            attributions before visualizing. This can be used to help focus
            on salient regions rather than specific salient pixels. If
            `None`, defaults to the value supplied to the constructor.

        cmap: matplotlib.colors.Colormap | str, optional
            Colormap or name of a Colormap to use for the visualization. If
            `None`, defaults to the value supplied to the constructor.

    Returns:
        A `np.ndarray` array of the numerical representation of the
        attributions as modified for the visualization. This includes 
        normalization, blurring, etc.
    """
    _, normalization_type, blur, cmap = self._check_args(
        attributions, None, normalization_type, blur, cmap
    )

    # Combine the channels.
    attributions = attributions.mean(
        axis=get_backend().channel_axis, keepdims=True
    )

    # Blur the attributions so the explanation is smoother.
    if blur:
        attributions = self._blur(attributions, blur)

    # Normalize the attributions.
    attributions = self._normalize(attributions, normalization_type)

    tiled_attributions = self.tiler.tile(attributions)

    # Normalize the pixels to be in the range [0, 1].
    x = self._normalize(x, '01')
    tiled_x = self.tiler.tile(x)

    if cmap is None:
        cmap = self.default_cmap

    if overlay_opacity is None:
        overlay_opacity = self.default_overlay_opacity

    # Display the figure:
    _fig = plt.figure() if fig is None else fig

    plt.axis('off')
    plt.imshow(tiled_x)
    plt.imshow(tiled_attributions, alpha=overlay_opacity, cmap=cmap)

    if output_file:
        plt.savefig(output_file, bbox_inches=0)

    if imshow:
        plt.show()

    elif fig is None:
        plt.close(_fig)

    return tiled_attributions if return_tiled else attributions

__init__(overlay_opacity=0.5, normalization_type=None, blur=10.0, cmap='jet')

Configures the default parameters for the __call__ method (these can be overridden by passing in values to __call__).

Parameters:

    overlay_opacity (float, default: 0.5)
        Value in the range [0, 1] specifying the opacity for the heatmap overlay.

    normalization_type (default: None)
        Specifies one of the following configurations for normalizing the attributions (each item is normalized separately):

        - 'unsigned_max': normalizes the attributions to the range [-1, 1] by dividing the attributions by the maximum absolute attribution value.
        - 'unsigned_max_positive_centered': same as above, but scales the values to the range [0, 1], with negative scores less than 0.5 and positive scores greater than 0.5.
        - 'magnitude_max': takes the absolute value of the attributions, then normalizes the attributions to the range [0, 1] by dividing by the maximum absolute attribution value.
        - 'magnitude_sum': takes the absolute value of the attributions, then scales them such that they sum to 1. If this option is used, each channel is normalized separately, such that each channel sums to 1.
        - 'signed_max': normalizes the attributions to the range [-1, 1] by dividing the positive values by the maximum positive attribution value and the negative values by the minimum negative attribution value.
        - 'signed_max_positive_centered': same as above, but scales the values to the range [0, 1], with negative scores less than 0.5 and positive scores greater than 0.5.
        - 'signed_sum': scales the positive attributions such that they sum to 1 and the negative attributions such that they sum to -1. If this option is used, each channel is normalized separately.
        - '01': normalizes the attributions to the range [0, 1] by subtracting the minimum attribution value then dividing by the maximum attribution value.
        - 'unnormalized': leaves the attributions unaffected.

        If None, either 'unsigned_max' (for single-channel data) or 'unsigned_max_positive_centered' (for multi-channel data) is used.

    blur (default: 10.0)
        Gives the radius of a Gaussian blur to be applied to the attributions before visualizing. This can be used to help focus on salient regions rather than specific salient pixels.

    cmap (matplotlib.colors.Colormap | str, default: 'jet')
        Colormap or name of a Colormap to use for the visualization. If None, the colormap will be chosen based on the normalization type. This argument is only used for single-channel data (including when combine_channels is True).
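
To make the normalization options concrete, here is an illustrative NumPy sketch (not library code) of how 'unsigned_max' and '01' behave on a toy attribution map, reading '01' as dividing by the maximum after the minimum has been subtracted.

import numpy as np

attributions = np.array([[-2.0, 0.0],
                         [1.0, 4.0]])   # toy attribution map

# 'unsigned_max': divide by the maximum absolute value; result lies in [-1, 1].
unsigned_max = attributions / np.abs(attributions).max()
# [[-0.5, 0.], [0.25, 1.]]

# '01': subtract the minimum, then divide by the resulting maximum; result lies in [0, 1].
shifted = attributions - attributions.min()
zero_one = shifted / shifted.max()
# [[0., 0.333...], [0.5, 1.]]
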
Source code in trulens_explain/trulens/visualizations.py, lines 462-534:
def __init__(
    self,
    overlay_opacity=0.5,
    normalization_type=None,
    blur=10.,
    cmap='jet'
):
    """
    Configures the default parameters for the `__call__` method (these can 
    be overridden by passing in values to `__call__`).

    Parameters:
        overlay_opacity: float
            Value in the range [0, 1] specifying the opacity for the heatmap
            overlay.

        normalization_type:
            Specifies one of the following configurations for normalizing
            the attributions (each item is normalized separately):

            - `'unsigned_max'`: normalizes the attributions to the range 
              [-1, 1] by dividing the attributions by the maximum absolute 
              attribution value.
            - `'unsigned_max_positive_centered'`: same as above, but scales
              the values to the range [0, 1], with negative scores less than
              0.5 and positive scores greater than 0.5. 
            - `'magnitude_max'`: takes the absolute value of the 
              attributions, then normalizes the attributions to the range 
              [0, 1] by dividing by the maximum absolute attribution value.
            - `'magnitude_sum'`: takes the absolute value of the 
              attributions, then scales them such that they sum to 1. If 
              this option is used, each channel is normalized separately, 
              such that each channel sums to 1.
            - `'signed_max'`: normalizes the attributions to the range 
              [-1, 1] by dividing the positive values by the maximum 
              positive attribution value and the negative values by the 
              minimum negative attribution value.
            - `'signed_max_positive_centered'`: same as above, but scales 
              the values to the range [0, 1], with negative scores less than
              0.5 and positive scores greater than 0.5.
            - `'signed_sum'`: scales the positive attributions such that 
              they sum to 1 and the negative attributions such that they
              scale to -1. If this option is used, each channel is 
              normalized separately.
            - `'01'`: normalizes the attributions to the range [0, 1] by 
              subtracting the minimum attribution value then dividing by the
              maximum attribution value.
            - `'unnormalized'`: leaves the attributions unaffected.

            If `None`, either `'unsigned_max'` (for single-channel data) or 
            `'unsigned_max_positive_centered'` (for multi-channel data) is
            used.

        blur:
            Gives the radius of a Gaussian blur to be applied to the 
            attributions before visualizing. This can be used to help focus
            on salient regions rather than specific salient pixels.

        cmap: matplotlib.colors.Colormap | str, optional
            Colormap or name of a Colormap to use for the visualization. If 
            `None`, the colormap will be chosen based on the normalization 
            type. This argument is only used for single-channel data
            (including when `combine_channels` is True).
    """

    super().__init__(
        combine_channels=True,
        normalization_type=normalization_type,
        blur=blur,
        cmap=cmap
    )

    self.default_overlay_opacity = overlay_opacity
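
Here is a minimal, hedged usage sketch of HeatmapVisualizer with stand-in data; the import path is assumed, and in practice the attributions would come from an attribution method (e.g., InternalInfluence) applied to the same inputs x.

import numpy as np

from trulens.visualizations import HeatmapVisualizer  # import path assumed

rng = np.random.default_rng(0)
x = rng.random((2, 224, 224, 3))           # stand-in images (channels-last assumed)
attributions = rng.normal(size=x.shape)    # stand-in attributions, same shape as x

viz = HeatmapVisualizer(
    overlay_opacity=0.6,
    normalization_type='unsigned_max',
    blur=8.,
    cmap='jet'
)

# Overlays the blurred, normalized attribution heatmap on the images, saves the
# figure, and returns the processed attribution array.
processed = viz(attributions, x, output_file='heatmap.png', imshow=False)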

IPython

Bases: HTML

Interactive python visualization output format.

Source code in trulens_explain/trulens/visualizations.py
1122
1123
1124
1125
1126
1127
1128
1129
1130
1131
1132
1133
1134
1135
1136
class IPython(HTML):
    """Interactive python visualization output format."""

    def __init__(self):
        super(IPython, self).__init__()
        try:
            self.m_ipy = importlib.import_module("IPython")
        except:
            raise ImportError(
                "Jupyter output requires IPython python module. Try 'pip install ipykernel'."
            )

    def render(self, s: str):
        html = HTML.render(self, s)
        return self.m_ipy.display.HTML(html)

MaskVisualizer

Bases: object

Visualizes attributions by masking the original image to highlight the regions with influence above a given threshold percentile. Intended particularly for use with input-attributions.

Source code in trulens_explain/trulens/visualizations.py, lines 683-843:
class MaskVisualizer(object):
    """
    Visualizes attributions by masking the original image to highlight the
    regions with influence above a given threshold percentile. Intended 
    particularly for use with input-attributions.
    """

    def __init__(
        self,
        blur=5.,
        threshold=0.5,
        masked_opacity=0.2,
        combine_channels=True,
        use_attr_as_opacity=False,
        positive_only=True
    ):
        """
        Configures the default parameters for the `__call__` method (these can 
        be overridden by passing in values to `__call__`).

        Parameters:
            blur:
                Gives the radius of a Gaussian blur to be applied to the 
                attributions before visualizing. This can be used to help focus
                on salient regions rather than specific salient pixels.

            threshold:
                Value in the range [0, 1]. Attribution values at or below the
                percentile given by `threshold` (after normalization, blurring,
                etc.) will be masked.

            masked_opacity: 
                Value in the range [0, 1] specifying the opacity for the parts
                of the image that are masked.

            combine_channels:
                If `True`, the attributions will be averaged across the channel
                dimension, resulting in a 1-channel attribution map.

            use_attr_as_opacity:
                If `True`, instead of using `threshold` and `masked_opacity`,
                the opacity of each pixel is given by the 0-1-normalized 
                attribution value.

            positive_only:
                If `True`, only pixels with positive attribution will be 
                unmasked (or given nonzero opacity when `use_attr_as_opacity` is
                true).
        """

        self.default_blur = blur
        self.default_thresh = threshold
        self.default_masked_opacity = masked_opacity
        self.default_combine_channels = combine_channels

        # TODO(klas): in the future we can allow configuring of tiling settings
        #   by allowing the user to specify the tiler.
        self.tiler = Tiler()

    def __call__(
        self,
        attributions,
        x,
        output_file=None,
        imshow=True,
        fig=None,
        return_tiled=True,
        blur=None,
        threshold=None,
        masked_opacity=None,
        combine_channels=None,
        use_attr_as_opacity=None,
        positive_only=None
    ):
        channel_axis = get_backend().channel_axis
        if attributions.shape != x.shape:
            raise ValueError(
                'Shape of `attributions` {} must match shape of `x` {}'.format(
                    attributions.shape, x.shape
                )
            )

        if blur is None:
            blur = self.default_blur

        if threshold is None:
            threshold = self.default_thresh

        if masked_opacity is None:
            masked_opacity = self.default_masked_opacity

        if combine_channels is None:
            combine_channels = self.default_combine_channels

        if len(attributions.shape) != 4:
            raise ValueError(
                '`MaskVisualizer` is intended for 4-D image-format data. Given '
                'input with dimension {}'.format(len(attributions.shape))
            )

        if combine_channels is None:
            combine_channels = self.default_combine_channels

        if combine_channels:
            attributions = attributions.mean(axis=channel_axis, keepdims=True)

        if x.shape[channel_axis] not in (1, 3, 4):
            raise ValueError(
                'To visualize, attributions must have either 1, 3, or 4 color '
                'channels, but Visualizer got {} channels.\n'
                'If you are visualizing an internal layer, consider setting '
                '`combine_channels` to True'.format(
                    attributions.shape[channel_axis]
                )
            )

        # Blur the attributions so the explanation is smoother.
        if blur is not None:
            attributions = [gaussian_filter(a, blur) for a in attributions]

        # If `positive_only` clip attributions.
        if positive_only:
            attributions = np.maximum(attributions, 0)

        # Normalize the attributions to be in the range [0, 1].
        attributions = [a - a.min() for a in attributions]
        attributions = [
            0. * a if a.max() == 0. else a / a.max() for a in attributions
        ]

        # Normalize the pixels to be in the range [0, 1]
        x = [xc - xc.min() for xc in x]
        x = np.array([0. * xc if xc.max() == 0. else xc / xc.max() for xc in x])

        # Threshold the attributions to create a mask.
        if threshold is not None:
            percentiles = [
                np.percentile(a, 100 * threshold) for a in attributions
            ]
            masks = np.array(
                [
                    np.maximum(a > p, masked_opacity)
                    for a, p in zip(attributions, percentiles)
                ]
            )

        else:
            masks = np.array(attributions)

        # Use the mask on the original image to visualize the explanation.
        attributions = masks * x
        tiled_attributions = self.tiler.tile(attributions)

        if imshow:
            plt.axis('off')
            plt.imshow(tiled_attributions)

            if output_file:
                plt.savefig(output_file, bbox_inches=0)

        return tiled_attributions if return_tiled else attributions

__init__(blur=5.0, threshold=0.5, masked_opacity=0.2, combine_channels=True, use_attr_as_opacity=False, positive_only=True)

Configures the default parameters for the __call__ method (these can be overridden by passing in values to __call__).

Parameters:

    blur (default: 5.0)
        Gives the radius of a Gaussian blur to be applied to the attributions before visualizing. This can be used to help focus on salient regions rather than specific salient pixels.

    threshold (default: 0.5)
        Value in the range [0, 1]. Attribution values at or below the percentile given by threshold (after normalization, blurring, etc.) will be masked.

    masked_opacity (default: 0.2)
        Value in the range [0, 1] specifying the opacity for the parts of the image that are masked.

    combine_channels (default: True)
        If True, the attributions will be averaged across the channel dimension, resulting in a 1-channel attribution map.

    use_attr_as_opacity (default: False)
        If True, instead of using threshold and masked_opacity, the opacity of each pixel is given by the 0-1-normalized attribution value.

    positive_only (default: True)
        If True, only pixels with positive attribution will be unmasked (or given nonzero opacity when use_attr_as_opacity is true).
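
The thresholding step can be seen in the __call__ source above; this standalone sketch (illustrative only) reproduces just that logic on a toy attribution map that has already been normalized to [0, 1].

import numpy as np

threshold = 0.5        # mask values at or below the 50th percentile
masked_opacity = 0.2   # masked pixels keep 20% opacity

a = np.array([[0.0, 0.2],
              [0.6, 1.0]])   # toy normalized attribution map

# Pixels above the percentile get full opacity (1.0); the rest get masked_opacity.
p = np.percentile(a, 100 * threshold)
mask = np.maximum(a > p, masked_opacity)
# p == 0.4, so mask == [[0.2, 0.2], [1.0, 1.0]]

# The visualizer then multiplies this mask with the 0-1 normalized image.
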
Source code in trulens_explain/trulens/visualizations.py, lines 690-740:
def __init__(
    self,
    blur=5.,
    threshold=0.5,
    masked_opacity=0.2,
    combine_channels=True,
    use_attr_as_opacity=False,
    positive_only=True
):
    """
    Configures the default parameters for the `__call__` method (these can 
    be overridden by passing in values to `__call__`).

    Parameters:
        blur:
            Gives the radius of a Gaussian blur to be applied to the 
            attributions before visualizing. This can be used to help focus
            on salient regions rather than specific salient pixels.

        threshold:
            Value in the range [0, 1]. Attribution values at or below the
            percentile given by `threshold` (after normalization, blurring,
            etc.) will be masked.

        masked_opacity: 
            Value in the range [0, 1] specifying the opacity for the parts
            of the image that are masked.

        combine_channels:
            If `True`, the attributions will be averaged across the channel
            dimension, resulting in a 1-channel attribution map.

        use_attr_as_opacity:
            If `True`, instead of using `threshold` and `masked_opacity`,
            the opacity of each pixel is given by the 0-1-normalized 
            attribution value.

        positive_only:
            If `True`, only pixels with positive attribution will be 
            unmasked (or given nonzero opacity when `use_attr_as_opacity` is
            true).
    """

    self.default_blur = blur
    self.default_thresh = threshold
    self.default_masked_opacity = masked_opacity
    self.default_combine_channels = combine_channels

    # TODO(klas): in the future we can allow configuring of tiling settings
    #   by allowing the user to specify the tiler.
    self.tiler = Tiler()

NLP

Bases: object

NLP Visualization tools.

Source code in trulens_explain/trulens/visualizations.py, lines 1139-1324
class NLP(object):
    """NLP Visualization tools."""

    # Batches of text inputs not yet tokenized.
    TextBatch = TypeVar("TextBatch")

    # Inputs that are directly accepted by wrapped models, tokenized.
    # TODO(piotrm): Reuse other typevars/aliases from elsewhere.
    ModelInput = TypeVar("ModelInput")

    # Outputs produced by wrapped models.
    # TODO(piotrm): Reuse other typevars/aliases from elsewhere.
    ModelOutput = TypeVar("ModelOutput")

    def __init__(
        self,
        wrapper: ModelWrapper,
        output: Optional[Output] = None,
        labels: Optional[Iterable[str]] = None,
        tokenize: Optional[Callable[[TextBatch], ModelInputs]] = None,
        decode: Optional[Callable[[Tensor], str]] = None,
        input_accessor: Optional[Callable[[ModelInputs],
                                          Iterable[Tensor]]] = None,
        output_accessor: Optional[Callable[[ModelOutput],
                                           Iterable[Tensor]]] = None,
        attr_aggregate: Optional[Callable[[Tensor], Tensor]] = None,
        hidden_tokens: Optional[Set[int]] = set()
    ):
        """Initializate NLP visualization tools for a given environment.

        Parameters:
            wrapper: ModelWrapper
                The wrapped model whose attributions we're visualizing.

            output: Output, optional
                Visualization output format. Defaults to PlainText unless
                ipython is detected and in which case defaults to IPython
                format.

            labels: Iterable[str], optional
                Names of prediction classes for classification models.

            tokenize: Callable[[TextBatch], ModelInput], optional
                Method to tokenize an instance.

            decode: Callable[[Tensor], str], optional
                Method to invert/decode the tokenization.

            input_accessor: Callable[[ModelInputs], Iterable[Tensor]], optional
                Method to extract input/token ids from model inputs (tokenize
                output) if needed.

            output_accessor: Callable[[ModelOutput], Iterable[Tensor]], optional
                Method to extract output logits from output structures if
                needed.

            attr_aggregate: Callable[[Tensor], Tensor], optional
                Method to aggregate attribution for embedding into a single
                value. Defaults to sum.

            hidden_tokens: Set[int], optional
                For token-based visualizations, which tokens to hide.
        """
        if output is None:
            try:
                # check if running in interactive python (jupyter, colab, etc.) to
                # use appropriate output format
                get_ipython()
                output = IPython()

            except NameError:
                output = PlainText()
                tru_logger(
                    "WARNING: could not guess preferred visualization output format, using PlainText"
                )

        # TODO: automatic inference of various parameters for common repositories like huggingface, tfhub.

        self.output = output
        self.labels = labels
        self.tokenize = tokenize
        self.decode = decode
        self.wrapper = wrapper

        self.input_accessor = input_accessor  # could be inferred
        self.output_accessor = output_accessor  # could be inferred

        B = get_backend()

        if attr_aggregate is None:
            attr_aggregate = B.sum

        self.attr_aggregate = attr_aggregate

        self.hidden_tokens = hidden_tokens

    def token_attribution(self, texts: Iterable[str], attr: AttributionMethod):
        """Visualize a token-based input attribution on given `texts` inputs via the attribution method `attr`.

        Parameters:
            texts: Iterable[str]
                The input texts to visualize.

            attr: AttributionMethod
                The attribution method to generate the token importances with.

        Returns: Any
            The visualization in the format specified by this class's `output` parameter.
        """

        B = get_backend()

        if self.tokenize is None:
            raise ValueError("tokenize not provided to NLP visualizer.")

        inputs = self.tokenize(texts)

        outputs = inputs.call_on(self.wrapper._model)
        attrs = inputs.call_on(attr.attributions)

        content = self.output.blank()

        input_ids = inputs
        if self.input_accessor is not None:
            input_ids = self.input_accessor(inputs)

        if (not isinstance(input_ids, Iterable)) or isinstance(input_ids, dict):
            raise ValueError(
                f"Inputs ({input_ids.__class__.__name__}) need to be iterable over instances. You might need to set input_accessor."
            )

        output_logits = outputs
        if self.output_accessor is not None:
            output_logits = self.output_accessor(outputs)

        if (not isinstance(output_logits, Iterable)) or isinstance(
                output_logits, dict):
            raise ValueError(
                f"Outputs ({output_logits.__class__.__name__}) need to be iterable over instances. You might need to set output_accessor."
            )

        for i, (sentence_word_id, attr,
                logits) in enumerate(zip(input_ids, attrs, output_logits)):

            logits = logits.to('cpu').detach().numpy()
            pred = logits.argmax()

            if self.labels is not None:
                pred_name = self.labels[pred]
            else:
                pred_name = str(pred)

            sent = self.output.append(
                self.output.escape(pred_name), ":", self.output.space()
            )

            for word_id, attr in zip(sentence_word_id, attr):
                word_id = int(B.as_array(word_id))

                if word_id in self.hidden_tokens:
                    continue

                if self.decode is not None:
                    word = self.decode(word_id)
                else:
                    word = str(word_id)

                mag = self.attr_aggregate(attr)

                if word[0] == ' ':
                    word = word[1:]
                    sent = self.output.append(sent, self.output.space())

                sent = self.output.append(
                    sent,
                    self.output.magnitude_colored(
                        self.output.escape(word), mag
                    )
                )

            content = self.output.append(
                content, self.output.line(sent), self.output.linebreak(),
                self.output.linebreak()
            )

        return self.output.render(content)
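
As a usage sketch (not taken from the source), the following wires a Hugging Face sentiment classifier into NLP. The model name, the get_model_wrapper helper, and the tokenize lambda are illustrative assumptions; in particular, tokenize must produce the model-input structure your wrapped model expects, which depends on your backend setup.

from transformers import AutoModelForSequenceClassification, AutoTokenizer

from trulens.nn.models import get_model_wrapper  # assumed wrapping helper
from trulens.visualizations import NLP

name = "distilbert-base-uncased-finetuned-sst-2-english"  # hypothetical choice
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

viz = NLP(
    wrapper=get_model_wrapper(model),
    labels=["negative", "positive"],
    decode=lambda token_id: tokenizer.decode(token_id),
    # Simplification: tokenize should return the ModelInputs structure the
    # wrapper expects; a raw transformers BatchEncoding may need adapting.
    tokenize=lambda texts: tokenizer(texts, padding=True, return_tensors="pt"),
    hidden_tokens={tokenizer.pad_token_id},
)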

__init__(wrapper, output=None, labels=None, tokenize=None, decode=None, input_accessor=None, output_accessor=None, attr_aggregate=None, hidden_tokens=set())

Initializate NLP visualization tools for a given environment.

Parameters:

- wrapper (ModelWrapper, required): The wrapped model whose attributions we're visualizing.
- output (Optional[Output], default: None): Visualization output format. Defaults to PlainText unless ipython is detected, in which case it defaults to IPython format.
- labels (Optional[Iterable[str]], default: None): Names of prediction classes for classification models.
- tokenize (Optional[Callable[[TextBatch], ModelInputs]], default: None): Method to tokenize an instance.
- decode (Optional[Callable[[Tensor], str]], default: None): Method to invert/decode the tokenization.
- input_accessor (Optional[Callable[[ModelInputs], Iterable[Tensor]]], default: None): Method to extract input/token ids from model inputs (tokenize output) if needed.
- output_accessor (Optional[Callable[[ModelOutput], Iterable[Tensor]]], default: None): Method to extract output logits from output structures if needed.
- attr_aggregate (Optional[Callable[[Tensor], Tensor]], default: None): Method to aggregate attribution for an embedding into a single value. Defaults to sum.
- hidden_tokens (Optional[Set[int]], default: set()): For token-based visualizations, which tokens to hide.

Source code in trulens_explain/trulens/visualizations.py, lines 1153-1233
def __init__(
    self,
    wrapper: ModelWrapper,
    output: Optional[Output] = None,
    labels: Optional[Iterable[str]] = None,
    tokenize: Optional[Callable[[TextBatch], ModelInputs]] = None,
    decode: Optional[Callable[[Tensor], str]] = None,
    input_accessor: Optional[Callable[[ModelInputs],
                                      Iterable[Tensor]]] = None,
    output_accessor: Optional[Callable[[ModelOutput],
                                       Iterable[Tensor]]] = None,
    attr_aggregate: Optional[Callable[[Tensor], Tensor]] = None,
    hidden_tokens: Optional[Set[int]] = set()
):
    """Initializate NLP visualization tools for a given environment.

    Parameters:
        wrapper: ModelWrapper
            The wrapped model whose attributions we're visualizing.

        output: Output, optional
            Visualization output format. Defaults to PlainText unless
            ipython is detected and in which case defaults to IPython
            format.

        labels: Iterable[str], optional
            Names of prediction classes for classification models.

        tokenize: Callable[[TextBatch], ModelInput], optional
            Method to tokenize an instance.

        decode: Callable[[Tensor], str], optional
            Method to invert/decode the tokenization.

        input_accessor: Callable[[ModelInputs], Iterable[Tensor]], optional
            Method to extract input/token ids from model inputs (tokenize
            output) if needed.

        output_accessor: Callable[[ModelOutput], Iterable[Tensor]], optional
            Method to extract output logits from output structures if
            needed.

        attr_aggregate: Callable[[Tensor], Tensor], optional
            Method to aggregate attribution for embedding into a single
            value. Defaults to sum.

        hidden_tokens: Set[int], optional
            For token-based visualizations, which tokens to hide.
    """
    if output is None:
        try:
            # check if running in interactive python (jupyter, colab, etc.) to
            # use appropriate output format
            get_ipython()
            output = IPython()

        except NameError:
            output = PlainText()
            tru_logger(
                "WARNING: could not guess preferred visualization output format, using PlainText"
            )

    # TODO: automatic inference of various parameters for common repositories like huggingface, tfhub.

    self.output = output
    self.labels = labels
    self.tokenize = tokenize
    self.decode = decode
    self.wrapper = wrapper

    self.input_accessor = input_accessor  # could be inferred
    self.output_accessor = output_accessor  # could be inferred

    B = get_backend()

    if attr_aggregate is None:
        attr_aggregate = B.sum

    self.attr_aggregate = attr_aggregate

    self.hidden_tokens = hidden_tokens

token_attribution(texts, attr)

Visualize a token-based input attribution on given texts inputs via the attribution method attr.

Parameters:

- texts (Iterable[str], required): The input texts to visualize.
- attr (AttributionMethod, required): The attribution method to generate the token importances with.

Returns:

Any: The visualization in the format specified by this class's output parameter.

Source code in trulens_explain/trulens/visualizations.py, lines 1235-1324
def token_attribution(self, texts: Iterable[str], attr: AttributionMethod):
    """Visualize a token-based input attribution on given `texts` inputs via the attribution method `attr`.

    Parameters:
        texts: Iterable[str]
            The input texts to visualize.

        attr: AttributionMethod
            The attribution method to generate the token importances with.

    Returns: Any
        The visualization in the format specified by this class's `output` parameter.
    """

    B = get_backend()

    if self.tokenize is None:
        raise ValueError("tokenize not provided to NLP visualizer.")

    inputs = self.tokenize(texts)

    outputs = inputs.call_on(self.wrapper._model)
    attrs = inputs.call_on(attr.attributions)

    content = self.output.blank()

    input_ids = inputs
    if self.input_accessor is not None:
        input_ids = self.input_accessor(inputs)

    if (not isinstance(input_ids, Iterable)) or isinstance(input_ids, dict):
        raise ValueError(
            f"Inputs ({input_ids.__class__.__name__}) need to be iterable over instances. You might need to set input_accessor."
        )

    output_logits = outputs
    if self.output_accessor is not None:
        output_logits = self.output_accessor(outputs)

    if (not isinstance(output_logits, Iterable)) or isinstance(
            output_logits, dict):
        raise ValueError(
            f"Outputs ({output_logits.__class__.__name__}) need to be iterable over instances. You might need to set output_accessor."
        )

    for i, (sentence_word_id, attr,
            logits) in enumerate(zip(input_ids, attrs, output_logits)):

        logits = logits.to('cpu').detach().numpy()
        pred = logits.argmax()

        if self.labels is not None:
            pred_name = self.labels[pred]
        else:
            pred_name = str(pred)

        sent = self.output.append(
            self.output.escape(pred_name), ":", self.output.space()
        )

        for word_id, attr in zip(sentence_word_id, attr):
            word_id = int(B.as_array(word_id))

            if word_id in self.hidden_tokens:
                continue

            if self.decode is not None:
                word = self.decode(word_id)
            else:
                word = str(word_id)

            mag = self.attr_aggregate(attr)

            if word[0] == ' ':
                word = word[1:]
                sent = self.output.append(sent, self.output.space())

            sent = self.output.append(
                sent,
                self.output.magnitude_colored(
                    self.output.escape(word), mag
                )
            )

        content = self.output.append(
            content, self.output.line(sent), self.output.linebreak(),
            self.output.linebreak()
        )

    return self.output.render(content)
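
Continuing the sketch above, token_attribution is driven by an attribution method. The construction below is illustrative only; the right attribution class and its arguments depend on your model, backend, and the layer (for example, the embedding layer) you attribute from.

from trulens.nn.attribution import IntegratedGradients  # one possible choice

attributor = IntegratedGradients(viz.wrapper)  # simplified for illustration

rendered = viz.token_attribution(
    ["The movie was surprisingly good.", "A dull, lifeless script."],
    attributor,
)
# In a notebook the IPython output colors each token by attribution magnitude;
# with PlainText output each token is printed as word(score).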

Output

Bases: ABC

Base class for visualization output formats.

Source code in trulens_explain/trulens/visualizations.py, lines 1013-1042
class Output(ABC):
    """Base class for visualization output formats."""

    @abstractmethod
    def blank(self) -> str:
        ...

    @abstractmethod
    def space(self) -> str:
        ...

    @abstractmethod
    def escape(self, s: str) -> str:
        ...

    @abstractmethod
    def line(self, s: str) -> str:
        ...

    @abstractmethod
    def magnitude_colored(self, s: str, mag: float) -> str:
        ...

    @abstractmethod
    def append(self, *parts: Iterable[str]) -> str:
        ...

    @abstractmethod
    def render(self, s: str) -> str:
        ...
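
The abstract interface above can be extended to other rendering targets. Below is a minimal, hypothetical HTML-flavored Output subclass, written only to illustrate what each method is responsible for; it is not part of the library.

import html


class SimpleHTML(Output):
    """Illustrative Output that emits inline-styled HTML strings."""

    def blank(self):
        return ""

    def space(self):
        return "&nbsp;"

    def escape(self, s):
        return html.escape(s)

    def line(self, s):
        return f"<p>{s}</p>"

    def linebreak(self):
        # token_attribution also calls linebreak(), although it is not listed
        # among the abstract methods above.
        return "<br>"

    def magnitude_colored(self, s, mag):
        # Tint positive attributions green and negative ones red.
        intensity = int(min(abs(float(mag)), 1.0) * 255)
        color = f"rgb(0,{intensity},0)" if mag >= 0 else f"rgb({intensity},0,0)"
        return f'<span style="color:{color}">{s}</span>'

    def append(self, *parts):
        return "".join(parts)

    def render(self, s):
        return s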

PlainText

Bases: Output

Plain text visualization output format.

Source code in trulens_explain/trulens/visualizations.py, lines 1048-1070
class PlainText(Output):
    """Plain text visualization output format."""

    def blank(self):
        return ""

    def space(self):
        return " "

    def escape(self, s):
        return s

    def line(self, s):
        return s

    def magnitude_colored(self, s, mag):
        return f"{s}({mag:0.3f})"

    def append(self, *parts):
        return ''.join(parts)

    def render(self, s):
        return s
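
For a feel of the plain-text format, a short illustrative snippet (the words and scores are made up):

out = PlainText()

line = out.append(
    out.escape("positive"), ":", out.space(),
    out.magnitude_colored("great", 0.812), out.space(),
    out.magnitude_colored("movie", 0.034),
)
print(out.render(out.line(line)))
# positive: great(0.812) movie(0.034)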

Tiler

Bases: object

Used to tile batched images or attributions.

Source code in trulens_explain/trulens/visualizations.py, lines 44-81
class Tiler(object):
    """
    Used to tile batched images or attributions.
    """

    def tile(self, a: np.ndarray) -> np.ndarray:
        """
        Tiles the given array into a grid that is as square as possible.

        Parameters:
            a:
                An array of 4D batched image data.

        Returns:
            A tiled array of the images from `a`. The resulting array has rank
            3 for color images, and 2 for grayscale images (the batch dimension
            is removed, as well as the channel dimension for grayscale images).
            The resulting array has its color channel dimension ordered last to
            fit the requirements of the `matplotlib` library.
        """

        # `pyplot` expects the channels to come last.
        if get_backend().dim_order == 'channels_first':
            a = a.transpose((0, 2, 3, 1))

        n, h, w, c = a.shape

        rows = int(np.sqrt(n))
        cols = int(np.ceil(float(n) / rows))

        new_a = np.zeros((h * rows, w * cols, c))

        for i, x in enumerate(a):
            row = i // cols
            col = i % cols
            new_a[row * h:(row + 1) * h, col * w:(col + 1) * w] = x

        return np.squeeze(new_a)

tile(a)

Tiles the given array into a grid that is as square as possible.

Parameters:

- a (np.ndarray, required): An array of 4D batched image data.

Returns:

np.ndarray: A tiled array of the images from a. The resulting array has rank 3 for color images, and 2 for grayscale images (the batch dimension is removed, as well as the channel dimension for grayscale images). The resulting array has its color channel dimension ordered last to fit the requirements of the matplotlib library.

Source code in trulens_explain/trulens/visualizations.py, lines 49-81
def tile(self, a: np.ndarray) -> np.ndarray:
    """
    Tiles the given array into a grid that is as square as possible.

    Parameters:
        a:
            An array of 4D batched image data.

    Returns:
        A tiled array of the images from `a`. The resulting array has rank
        3 for color images, and 2 for grayscale images (the batch dimension
        is removed, as well as the channel dimension for grayscale images).
        The resulting array has its color channel dimension ordered last to
        fit the requirements of the `matplotlib` library.
    """

    # `pyplot` expects the channels to come last.
    if get_backend().dim_order == 'channels_first':
        a = a.transpose((0, 2, 3, 1))

    n, h, w, c = a.shape

    rows = int(np.sqrt(n))
    cols = int(np.ceil(float(n) / rows))

    new_a = np.zeros((h * rows, w * cols, c))

    for i, x in enumerate(a):
        row = i // cols
        col = i % cols
        new_a[row * h:(row + 1) * h, col * w:(col + 1) * w] = x

    return np.squeeze(new_a)
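
As a quick sketch of the tiling behavior (random data for illustration, assuming a channels-last backend):

import numpy as np

tiler = Tiler()

# Six 32x32 RGB "images".
batch = np.random.rand(6, 32, 32, 3)

grid = tiler.tile(batch)
print(grid.shape)
# (64, 96, 3): rows = int(sqrt(6)) = 2, cols = ceil(6 / 2) = 3, so the batch
# is laid out on a 2x3 grid of 32x32 tiles.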

Visualizer

Bases: object

Visualizes attributions directly as a color image. Intended particularly for use with input-attributions.

This can also be used for viewing images (rather than attributions).

Source code in trulens_explain/trulens/visualizations.py, lines 84-453
class Visualizer(object):
    """
    Visualizes attributions directly as a color image. Intended particularly for
    use with input-attributions.

    This can also be used for viewing images (rather than attributions).
    """

    def __init__(
        self,
        combine_channels: bool = False,
        normalization_type: str = None,
        blur: float = 0.,
        cmap: Colormap = None
    ):
        """
        Configures the default parameters for the `__call__` method (these can 
        be overridden by passing in values to `__call__`).

        Parameters:
            combine_channels:
                If `True`, the attributions will be averaged across the channel
                dimension, resulting in a 1-channel attribution map.

            normalization_type:
                Specifies one of the following configurations for normalizing
                the attributions (each item is normalized separately):

                - `'unsigned_max'`: normalizes the attributions to the range 
                  [-1, 1] by dividing the attributions by the maximum absolute 
                  attribution value.
                - `'unsigned_max_positive_centered'`: same as above, but scales
                  the values to the range [0, 1], with negative scores less than
                  0.5 and positive scores greater than 0.5. 
                - `'magnitude_max'`: takes the absolute value of the 
                  attributions, then normalizes the attributions to the range 
                  [0, 1] by dividing by the maximum absolute attribution value.
                - `'magnitude_sum'`: takes the absolute value of the 
                  attributions, then scales them such that they sum to 1. If 
                  this option is used, each channel is normalized separately, 
                  such that each channel sums to 1.
                - `'signed_max'`: normalizes the attributions to the range 
                  [-1, 1] by dividing the positive values by the maximum 
                  positive attribution value and the negative values by the 
                  minimum negative attribution value.
                - `'signed_max_positive_centered'`: same as above, but scales 
                  the values to the range [0, 1], with negative scores less than
                  0.5 and positive scores greater than 0.5.
                - `'signed_sum'`: scales the positive attributions such that 
                  they sum to 1 and the negative attributions such that they
                  scale to -1. If this option is used, each channel is 
                  normalized separately.
                - `'01'`: normalizes the attributions to the range [0, 1] by 
                  subtracting the minimum attribution value then dividing by the
                  maximum attribution value.
                - `'unnormalized'`: leaves the attributions unaffected.

                If `None`, either `'unsigned_max'` (for single-channel data) or 
                `'unsigned_max_positive_centered'` (for multi-channel data) is
                used.

            blur:
                Gives the radius of a Gaussian blur to be applied to the 
                attributions before visualizing. This can be used to help focus
                on salient regions rather than specific salient pixels.

            cmap: matplotlib.colors.Colormap | str, optional
                Colormap or name of a Colormap to use for the visualization. If 
                `None`, the colormap will be chosen based on the normalization 
                type. This argument is only used for single-channel data
                (including when `combine_channels` is True).
        """
        self.default_combine_channels = combine_channels
        self.default_normalization_type = normalization_type
        self.default_blur = blur
        self.default_cmap = cmap if cmap is not None else self._get_hotcold()

        # TODO(klas): in the future we can allow configuring of tiling settings
        #   by allowing the user to specify the tiler.
        self.tiler = Tiler()

    def __call__(
        self,
        attributions,
        output_file=None,
        imshow=True,
        fig=None,
        return_tiled=False,
        combine_channels=None,
        normalization_type=None,
        blur=None,
        cmap=None
    ) -> np.ndarray:
        """
        Visualizes the given attributions.

        Parameters:
            attributions:
                A `np.ndarray` containing the attributions to be visualized.

            output_file:
                File name to save the visualization image to. If `None`, no
                image will be saved, but the figure can still be displayed.

            imshow:
                If true, the visualization will be displayed. Otherwise the
                figure will not be displayed, but the figure can still be saved.

            fig:
                The `pyplot` figure to display the visualization in. If `None`,
                a new figure will be created.

            return_tiled:
                If true, the returned array will be in the same shape as the
                visualization, with no batch dimension and the samples in the
                batch tiled along the width and height dimensions. If false, the
                returned array will be reshaped to match `attributions`.

            combine_channels:
                If `True`, the attributions will be averaged across the channel
                dimension, resulting in a 1-channel attribution map. If `None`,
                defaults to the value supplied to the constructor.

            normalization_type:
                Specifies one of the following configurations for normalizing
                the attributions (each item is normalized separately):

                - `'unsigned_max'`: normalizes the attributions to the range 
                  [-1, 1] by dividing the attributions by the maximum absolute 
                  attribution value.
                - `'unsigned_max_positive_centered'`: same as above, but scales
                  the values to the range [0, 1], with negative scores less than
                  0.5 and positive scores greater than 0.5. 
                - `'magnitude_max'`: takes the absolute value of the 
                  attributions, then normalizes the attributions to the range 
                  [0, 1] by dividing by the maximum absolute attribution value.
                - `'magnitude_sum'`: takes the absolute value of the 
                  attributions, then scales them such that they sum to 1. If 
                  this option is used, each channel is normalized separately, 
                  such that each channel sums to 1.
                - `'signed_max'`: normalizes the attributions to the range 
                  [-1, 1] by dividing the positive values by the maximum 
                  positive attribution value and the negative values by the 
                  minimum negative attribution value.
                - `'signed_max_positive_centered'`: same as above, but scales 
                  the values to the range [0, 1], with negative scores less than
                  0.5 and positive scores greater than 0.5.
                - `'signed_sum'`: scales the positive attributions such that 
                  they sum to 1 and the negative attributions such that they
                  scale to -1. If this option is used, each channel is 
                  normalized separately.
                - `'01'`: normalizes the attributions to the range [0, 1] by 
                  subtracting the minimum attribution value then dividing by the
                  maximum attribution value.
                - `'unnormalized'`: leaves the attributions unaffected.

                If `None`, defaults to the value supplied to the constructor.

            blur:
                Gives the radius of a Gaussian blur to be applied to the 
                attributions before visualizing. This can be used to help focus
                on salient regions rather than specific salient pixels. If
                `None`, defaults to the value supplied to the constructor.

            cmap: matplotlib.colors.Colormap | str, optional
                Colormap or name of a Colormap to use for the visualization. If
                `None`, defaults to the value supplied to the constructor.

        Returns:
            A `np.ndarray` array of the numerical representation of the
            attributions as modified for the visualization. This includes 
            normalization, blurring, etc.
        """
        combine_channels, normalization_type, blur, cmap = self._check_args(
            attributions, combine_channels, normalization_type, blur, cmap
        )

        # Combine the channels if specified.
        if combine_channels:
            attributions = attributions.mean(
                axis=get_backend().channel_axis, keepdims=True
            )

        # Blur the attributions so the explanation is smoother.
        if blur:
            attributions = self._blur(attributions, blur)

        # Normalize the attributions.
        attributions = self._normalize(attributions, normalization_type)

        tiled_attributions = self.tiler.tile(attributions)

        # Display the figure:
        _fig = plt.figure() if fig is None else fig

        plt.axis('off')
        plt.imshow(tiled_attributions, cmap=cmap)

        if output_file:
            plt.savefig(output_file, bbox_inches=0)

        if imshow:
            plt.show()

        elif fig is None:
            plt.close(_fig)

        return tiled_attributions if return_tiled else attributions

    def _check_args(
        self, attributions, combine_channels, normalization_type, blur, cmap
    ):
        """
        Validates the arguments, and sets them to their default values if they
        are not specified.
        """
        if attributions.ndim != 4:
            raise ValueError(
                '`Visualizer` is intended for 4-D image-format data. Given '
                'input with dimension {}'.format(attributions.ndim)
            )

        if combine_channels is None:
            combine_channels = self.default_combine_channels

        channel_axis = get_backend().channel_axis
        if not (attributions.shape[channel_axis] in (1, 3, 4) or
                combine_channels):

            raise ValueError(
                'To visualize, attributions must have either 1, 3, or 4 color '
                'channels, but `Visualizer` got {} channels.\n'
                'If you are visualizing an internal layer, consider setting '
                '`combine_channels` to True'.format(
                    attributions.shape[channel_axis]
                )
            )

        if normalization_type is None:
            normalization_type = self.default_normalization_type

            if normalization_type is None:
                if combine_channels or attributions.shape[channel_axis] == 1:
                    normalization_type = 'unsigned_max'

                else:
                    normalization_type = 'unsigned_max_positive_centered'

        valid_normalization_types = [
            'unsigned_max',
            'unsigned_max_positive_centered',
            'magnitude_max',
            'magnitude_sum',
            'signed_max',
            'signed_max_positive_centered',
            'signed_sum',
            '01',
            'unnormalized',
        ]
        if normalization_type not in valid_normalization_types:
            raise ValueError(
                '`normalization_type` must be None or one of the following options:' +
                ','.join(
                    [
                        '\'{}\''.format(norm_type)
                        for norm_type in valid_normalization_types
                    ]
                )
            )

        if blur is None:
            blur = self.default_blur

        if cmap is None:
            cmap = self.default_cmap

        return combine_channels, normalization_type, blur, cmap

    def _normalize(self, attributions, normalization_type, eps=1e-20):
        channel_axis = get_backend().channel_axis
        if normalization_type == 'unnormalized':
            return attributions

        split_by_channel = normalization_type.endswith('sum')

        channel_split = [attributions] if split_by_channel else np.split(
            attributions, attributions.shape[channel_axis], axis=channel_axis
        )

        normalized_attributions = []
        for c_map in channel_split:
            if normalization_type == 'magnitude_max':
                c_map = np.abs(c_map) / (
                    np.abs(c_map).max(axis=(1, 2, 3), keepdims=True) + eps
                )

            elif normalization_type == 'magnitude_sum':
                c_map = np.abs(c_map) / (
                    np.abs(c_map).sum(axis=(1, 2, 3), keepdims=True) + eps
                )

            elif normalization_type.startswith('signed_max'):
                positive_max = c_map.max(axis=(1, 2, 3), keepdims=True)
                negative_max = (-c_map).max(axis=(1, 2, 3), keepdims=True)

                # Normalize the positive scores to [0, 1] and negative scores
                # to [-1, 0].
                normalization_factor = np.where(
                    c_map >= 0, positive_max, negative_max
                )
                c_map = c_map / (normalization_factor + eps)

                # If positive-centered, normalize so that all scores are in the
                # range [0, 1], with negative scores less than 0.5 and positive
                # scores greater than 0.5.
                if normalization_type.endswith('positive_centered'):
                    c_map = c_map / 2. + 0.5

            elif normalization_type == 'signed_sum':
                positive_max = np.maximum(c_map, 0).sum(
                    axis=(1, 2, 3), keepdims=True
                )
                negative_max = np.maximum(-c_map, 0).sum(
                    axis=(1, 2, 3), keepdims=True
                )

                # Normalize the positive scores to ensure they sum to 1 and the
                # negative scores to ensure they sum to -1.
                normalization_factor = np.where(
                    c_map >= 0, positive_max, negative_max
                )
                c_map = c_map / (normalization_factor + eps)

            elif normalization_type.startswith('unsigned_max'):
                c_map = c_map / (
                    np.abs(c_map).max(axis=(1, 2, 3), keepdims=True) + eps
                )

                # If positive-centered, normalize so that all scores are in the
                # range [0, 1], with negative scores less than 0.5 and positive
                # scores greater than 0.5.
                if normalization_type.endswith('positive_centered'):
                    c_map = c_map / 2. + 0.5

            elif normalization_type == '01':
                c_map = c_map - c_map.min(axis=(1, 2, 3), keepdims=True)
                c_map = c_map / (c_map.max(axis=(1, 2, 3), keepdims=True) + eps)

            normalized_attributions.append(c_map)

        return np.concatenate(normalized_attributions, axis=channel_axis)

    def _blur(self, attributions, blur):
        for i in range(attributions.shape[0]):
            attributions[i] = gaussian_filter(attributions[i], blur)

        return attributions

    def _get_hotcold(self):
        hot = cm.get_cmap('hot', 128)
        cool = cm.get_cmap('cool', 128)
        binary = cm.get_cmap('binary', 128)
        hotcold = np.vstack(
            (
                binary(np.linspace(0, 1, 128)) * cool(np.linspace(0, 1, 128)),
                hot(np.linspace(0, 1, 128))
            )
        )

        return ListedColormap(hotcold, name='hotcold')
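
To make the normalization options concrete, here is a small standalone sketch of 'unsigned_max' and 'unsigned_max_positive_centered' (the multi-channel default): divide by the maximum absolute value, then shift into [0, 1] so that 0.5 is the neutral point. The values are illustrative only.

import numpy as np

# Illustrative values standing in for one attribution map.
a = np.array([-2.0, -0.5, 0.0, 1.0, 4.0])

# 'unsigned_max': scale into [-1, 1] by the maximum absolute value.
unsigned_max = a / (np.abs(a).max() + 1e-20)
# -> [-0.5, -0.125, 0.0, 0.25, 1.0]

# 'unsigned_max_positive_centered': shift into [0, 1] with 0.5 as neutral.
centered = unsigned_max / 2.0 + 0.5
# -> [0.25, 0.4375, 0.5, 0.625, 1.0]; negatives land below 0.5 and positives
# above 0.5, which suits a diverging colormap such as the default hotcold.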

__call__(attributions, output_file=None, imshow=True, fig=None, return_tiled=False, combine_channels=None, normalization_type=None, blur=None, cmap=None)

Visualizes the given attributions.

Parameters:

- attributions (required): A np.ndarray containing the attributions to be visualized.
- output_file (default: None): File name to save the visualization image to. If None, no image will be saved, but the figure can still be displayed.
- imshow (default: True): If true, the visualization will be displayed. Otherwise the figure will not be displayed, but the figure can still be saved.
- fig (default: None): The pyplot figure to display the visualization in. If None, a new figure will be created.
- return_tiled (default: False): If true, the returned array will be in the same shape as the visualization, with no batch dimension and the samples in the batch tiled along the width and height dimensions. If false, the returned array will be reshaped to match attributions.
- combine_channels (default: None): If True, the attributions will be averaged across the channel dimension, resulting in a 1-channel attribution map. If None, defaults to the value supplied to the constructor.
- normalization_type (default: None): Specifies one of the following configurations for normalizing the attributions (each item is normalized separately):
  - 'unsigned_max': normalizes the attributions to the range [-1, 1] by dividing the attributions by the maximum absolute attribution value.
  - 'unsigned_max_positive_centered': same as above, but scales the values to the range [0, 1], with negative scores less than 0.5 and positive scores greater than 0.5.
  - 'magnitude_max': takes the absolute value of the attributions, then normalizes the attributions to the range [0, 1] by dividing by the maximum absolute attribution value.
  - 'magnitude_sum': takes the absolute value of the attributions, then scales them such that they sum to 1. If this option is used, each channel is normalized separately, such that each channel sums to 1.
  - 'signed_max': normalizes the attributions to the range [-1, 1] by dividing the positive values by the maximum positive attribution value and the negative values by the minimum negative attribution value.
  - 'signed_max_positive_centered': same as above, but scales the values to the range [0, 1], with negative scores less than 0.5 and positive scores greater than 0.5.
  - 'signed_sum': scales the positive attributions such that they sum to 1 and the negative attributions such that they scale to -1. If this option is used, each channel is normalized separately.
  - '01': normalizes the attributions to the range [0, 1] by subtracting the minimum attribution value then dividing by the maximum attribution value.
  - 'unnormalized': leaves the attributions unaffected.

  If None, defaults to the value supplied to the constructor.
- blur (default: None): Gives the radius of a Gaussian blur to be applied to the attributions before visualizing. This can be used to help focus on salient regions rather than specific salient pixels. If None, defaults to the value supplied to the constructor.
- cmap (matplotlib.colors.Colormap | str, optional, default: None): Colormap or name of a Colormap to use for the visualization. If None, defaults to the value supplied to the constructor.

Returns:

np.ndarray: The numerical representation of the attributions as modified for the visualization. This includes normalization, blurring, etc.

Source code in trulens_explain/trulens/visualizations.py, lines 165-291
def __call__(
    self,
    attributions,
    output_file=None,
    imshow=True,
    fig=None,
    return_tiled=False,
    combine_channels=None,
    normalization_type=None,
    blur=None,
    cmap=None
) -> np.ndarray:
    """
    Visualizes the given attributions.

    Parameters:
        attributions:
            A `np.ndarray` containing the attributions to be visualized.

        output_file:
            File name to save the visualization image to. If `None`, no
            image will be saved, but the figure can still be displayed.

        imshow:
            If true, the visualization will be displayed. Otherwise the
            figure will not be displayed, but the figure can still be saved.

        fig:
            The `pyplot` figure to display the visualization in. If `None`,
            a new figure will be created.

        return_tiled:
            If true, the returned array will be in the same shape as the
            visualization, with no batch dimension and the samples in the
            batch tiled along the width and height dimensions. If false, the
            returned array will be reshaped to match `attributions`.

        combine_channels:
            If `True`, the attributions will be averaged across the channel
            dimension, resulting in a 1-channel attribution map. If `None`,
            defaults to the value supplied to the constructor.

        normalization_type:
            Specifies one of the following configurations for normalizing
            the attributions (each item is normalized separately):

            - `'unsigned_max'`: normalizes the attributions to the range 
              [-1, 1] by dividing the attributions by the maximum absolute 
              attribution value.
            - `'unsigned_max_positive_centered'`: same as above, but scales
              the values to the range [0, 1], with negative scores less than
              0.5 and positive scores greater than 0.5. 
            - `'magnitude_max'`: takes the absolute value of the 
              attributions, then normalizes the attributions to the range 
              [0, 1] by dividing by the maximum absolute attribution value.
            - `'magnitude_sum'`: takes the absolute value of the 
              attributions, then scales them such that they sum to 1. If 
              this option is used, each channel is normalized separately, 
              such that each channel sums to 1.
            - `'signed_max'`: normalizes the attributions to the range 
              [-1, 1] by dividing the positive values by the maximum 
              positive attribution value and the negative values by the 
              minimum negative attribution value.
            - `'signed_max_positive_centered'`: same as above, but scales 
              the values to the range [0, 1], with negative scores less than
              0.5 and positive scores greater than 0.5.
            - `'signed_sum'`: scales the positive attributions such that 
              they sum to 1 and the negative attributions such that they
              scale to -1. If this option is used, each channel is 
              normalized separately.
            - `'01'`: normalizes the attributions to the range [0, 1] by 
              subtracting the minimum attribution value then dividing by the
              maximum attribution value.
            - `'unnormalized'`: leaves the attributions unaffected.

            If `None`, defaults to the value supplied to the constructor.

        blur:
            Gives the radius of a Gaussian blur to be applied to the 
            attributions before visualizing. This can be used to help focus
            on salient regions rather than specific salient pixels. If
            `None`, defaults to the value supplied to the constructor.

        cmap: matplotlib.colors.Colormap | str, optional
            Colormap or name of a Colormap to use for the visualization. If
            `None`, defaults to the value supplied to the constructor.

    Returns:
        A `np.ndarray` array of the numerical representation of the
        attributions as modified for the visualization. This includes 
        normalization, blurring, etc.
    """
    combine_channels, normalization_type, blur, cmap = self._check_args(
        attributions, combine_channels, normalization_type, blur, cmap
    )

    # Combine the channels if specified.
    if combine_channels:
        attributions = attributions.mean(
            axis=get_backend().channel_axis, keepdims=True
        )

    # Blur the attributions so the explanation is smoother.
    if blur:
        attributions = self._blur(attributions, blur)

    # Normalize the attributions.
    attributions = self._normalize(attributions, normalization_type)

    tiled_attributions = self.tiler.tile(attributions)

    # Display the figure:
    _fig = plt.figure() if fig is None else fig

    plt.axis('off')
    plt.imshow(tiled_attributions, cmap=cmap)

    if output_file:
        plt.savefig(output_file, bbox_inches=0)

    if imshow:
        plt.show()

    elif fig is None:
        plt.close(_fig)

    return tiled_attributions if return_tiled else attributions
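
A short usage sketch (the attribution array is a placeholder, and a channels-last backend is assumed):

import numpy as np

# Placeholder standing in for the output of an attribution method: a batch of
# four input-attribution maps for 224x224 RGB inputs.
attrs = np.random.randn(4, 224, 224, 3)

visualizer = Visualizer(blur=3.0, normalization_type='signed_max')

# Display the tiled visualization and also save it to disk.
modified = visualizer(attrs, output_file='attributions.png')

# `modified` matches the shape of `attrs` (normalized and blurred); pass
# return_tiled=True to get the tiled grid instead.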

__init__(combine_channels=False, normalization_type=None, blur=0.0, cmap=None)

Configures the default parameters for the __call__ method (these can be overridden by passing in values to __call__).

Parameters:

- combine_channels (bool, default: False): If True, the attributions will be averaged across the channel dimension, resulting in a 1-channel attribution map.
- normalization_type (str, default: None): Specifies one of the following configurations for normalizing the attributions (each item is normalized separately):
  - 'unsigned_max': normalizes the attributions to the range [-1, 1] by dividing the attributions by the maximum absolute attribution value.
  - 'unsigned_max_positive_centered': same as above, but scales the values to the range [0, 1], with negative scores less than 0.5 and positive scores greater than 0.5.
  - 'magnitude_max': takes the absolute value of the attributions, then normalizes the attributions to the range [0, 1] by dividing by the maximum absolute attribution value.
  - 'magnitude_sum': takes the absolute value of the attributions, then scales them such that they sum to 1. If this option is used, each channel is normalized separately, such that each channel sums to 1.
  - 'signed_max': normalizes the attributions to the range [-1, 1] by dividing the positive values by the maximum positive attribution value and the negative values by the minimum negative attribution value.
  - 'signed_max_positive_centered': same as above, but scales the values to the range [0, 1], with negative scores less than 0.5 and positive scores greater than 0.5.
  - 'signed_sum': scales the positive attributions such that they sum to 1 and the negative attributions such that they scale to -1. If this option is used, each channel is normalized separately.
  - '01': normalizes the attributions to the range [0, 1] by subtracting the minimum attribution value then dividing by the maximum attribution value.
  - 'unnormalized': leaves the attributions unaffected.

  If None, either 'unsigned_max' (for single-channel data) or 'unsigned_max_positive_centered' (for multi-channel data) is used.
- blur (float, default: 0.0): Gives the radius of a Gaussian blur to be applied to the attributions before visualizing. This can be used to help focus on salient regions rather than specific salient pixels.
- cmap (Colormap | str, optional, default: None): Colormap or name of a Colormap to use for the visualization. If None, the colormap will be chosen based on the normalization type. This argument is only used for single-channel data (including when combine_channels is True).

Source code in trulens_explain/trulens/visualizations.py, lines 92-163
def __init__(
    self,
    combine_channels: bool = False,
    normalization_type: str = None,
    blur: float = 0.,
    cmap: Colormap = None
):
    """
    Configures the default parameters for the `__call__` method (these can 
    be overridden by passing in values to `__call__`).

    Parameters:
        combine_channels:
            If `True`, the attributions will be averaged across the channel
            dimension, resulting in a 1-channel attribution map.

        normalization_type:
            Specifies one of the following configurations for normalizing
            the attributions (each item is normalized separately):

            - `'unsigned_max'`: normalizes the attributions to the range 
              [-1, 1] by dividing the attributions by the maximum absolute 
              attribution value.
            - `'unsigned_max_positive_centered'`: same as above, but scales
              the values to the range [0, 1], with negative scores less than
              0.5 and positive scores greater than 0.5. 
            - `'magnitude_max'`: takes the absolute value of the 
              attributions, then normalizes the attributions to the range 
              [0, 1] by dividing by the maximum absolute attribution value.
            - `'magnitude_sum'`: takes the absolute value of the 
              attributions, then scales them such that they sum to 1. If 
              this option is used, each channel is normalized separately, 
              such that each channel sums to 1.
            - `'signed_max'`: normalizes the attributions to the range 
              [-1, 1] by dividing the positive values by the maximum 
              positive attribution value and the negative values by the 
              minimum negative attribution value.
            - `'signed_max_positive_centered'`: same as above, but scales 
              the values to the range [0, 1], with negative scores less than
              0.5 and positive scores greater than 0.5.
            - `'signed_sum'`: scales the positive attributions such that 
              they sum to 1 and the negative attributions such that they
              scale to -1. If this option is used, each channel is 
              normalized separately.
            - `'01'`: normalizes the attributions to the range [0, 1] by 
              subtracting the minimum attribution value then dividing by the
              maximum attribution value.
            - `'unnormalized'`: leaves the attributions unaffected.

            If `None`, either `'unsigned_max'` (for single-channel data) or 
            `'unsigned_max_positive_centered'` (for multi-channel data) is
            used.

        blur:
            Gives the radius of a Gaussian blur to be applied to the 
            attributions before visualizing. This can be used to help focus
            on salient regions rather than specific salient pixels.

        cmap: matplotlib.colors.Colormap | str, optional
            Colormap or name of a Colormap to use for the visualization. If 
            `None`, the colormap will be chosen based on the normalization 
            type. This argument is only used for single-channel data
            (including when `combine_channels` is True).
    """
    self.default_combine_channels = combine_channels
    self.default_normalization_type = normalization_type
    self.default_blur = blur
    self.default_cmap = cmap if cmap is not None else self._get_hotcold()

    # TODO(klas): in the future we can allow configuring of tiling settings
    #   by allowing the user to specify the tiler.
    self.tiler = Tiler()
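
As a final sketch, combining channels lets a named matplotlib colormap be applied to internal-layer attributions with many channels (the shapes and names here are placeholders, and a channels-last backend is assumed):

import numpy as np

# Placeholder internal-layer attributions with 64 channels.
internal_attrs = np.random.randn(2, 14, 14, 64)

viz = Visualizer(
    combine_channels=True,          # average the 64 channels into one map
    normalization_type='magnitude_max',
    cmap='viridis',                 # any matplotlib colormap name works here
)

# Save to disk without displaying the figure.
heat = viz(internal_attrs, imshow=False, output_file='layer_attrs.png')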