
explanation utility

In AI, explanation utilities provide details about a model’s output, such as the lineage of the data used, the salient features in the model, or the specific inference rules fired to derive the result. For processes that are too complex or untraceable to support a detailed explanation, such as neural network-based classification, the size or nature of the training set may be the closest available proxy. Knowing how much each feature or element in the model contributed to a result can help find gaps in the training data or errors in the model, or guide manual tuning of the weights of contributing elements. Explanations can also be used to recognize bias in the model and to verify that the model is behaving as expected.
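One common way to estimate per-feature contributions is permutation importance: shuffle one feature at a time and measure how much the model’s accuracy drops. The sketch below is a minimal, illustrative example, assuming scikit-learn and using the iris dataset and a random forest purely as stand-ins for a real model:

```python
# Minimal sketch of an explanation utility via permutation importance.
# Assumes scikit-learn; dataset and model are illustrative placeholders.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Estimate each feature's contribution by measuring the accuracy drop
# when that feature's values are randomly shuffled on held-out data.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

for name, mean, std in zip(data.feature_names,
                           result.importances_mean,
                           result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

A feature whose importance is near zero may signal a gap in the training data or a modeling error, which is exactly the kind of diagnostic use described above.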
