Data Science Libraries and Resources
Statistics
SciPy
Machine Learning
XAI (Explainable AI) methods
These methods help explain how machine learning models arrive at their predictions, making the models more transparent and trustworthy.
SHAP (SHapley Additive exPlanations):
- SHAP values are a unified measure of feature importance based on cooperative game theory.
- They provide consistency and local accuracy, making it possible to explain individual predictions.
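A minimal sketch with the `shap` package follows; the random forest and the diabetes dataset are illustrative stand-ins, not part of the original notes.

```python
# Minimal sketch: SHAP values for a tree ensemble. The model and
# dataset are illustrative choices, not from the original notes.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# shap.Explainer dispatches to a suitable algorithm (TreeExplainer here).
explainer = shap.Explainer(model)
shap_values = explainer(X.iloc[:100])

# Local accuracy: base value + sum of a row's SHAP values equals the
# model's prediction for that row.
shap.plots.waterfall(shap_values[0])  # explain one prediction
shap.plots.beeswarm(shap_values)      # global view across samples
```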
LIME (Local Interpretable Model-agnostic Explanations):
- LIME explains individual predictions by locally approximating the model around the prediction.
- It generates interpretable models (e.g., linear models) to understand the behavior of complex models.
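A minimal sketch with the `lime` package; the iris classifier is an illustrative assumption.

```python
# Minimal sketch: LIME on tabular data. Model and dataset are
# illustrative, not from the original notes.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the sample, queries the model on the perturbations, and
# fits a weighted linear surrogate that is faithful in the neighbourhood.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())  # (feature condition, local weight) pairs
```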
Integrated Gradients:
- A method for attributing the prediction of a deep network to its input features.
- It works by integrating the gradients of the model’s output with respect to the input along a straight path from a baseline to the input.
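The path integral is easy to approximate directly; below is a from-scratch PyTorch sketch with a toy two-layer model and a zero baseline, both illustrative assumptions (libraries such as Captum ship tested implementations).

```python
# From-scratch sketch of Integrated Gradients; the tiny model and
# zero baseline are illustrative assumptions.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1)
)

def integrated_gradients(model, x, baseline, steps=50):
    # Riemann approximation of the integral of gradients along the
    # straight line from the baseline to the input.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    path = baseline + alphas * (x - baseline)  # (steps, n_features)
    path.requires_grad_(True)
    model(path).sum().backward()
    avg_grad = path.grad.mean(dim=0)           # mean gradient along the path
    # Completeness: attributions sum to roughly f(x) - f(baseline).
    return (x - baseline) * avg_grad

x = torch.randn(4)
baseline = torch.zeros(4)  # a common (but not mandatory) baseline choice
print(integrated_gradients(model, x, baseline))
```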
Other:
- Partial Dependence Plots (PDPs): Show the average relationship between a feature and the predicted outcome, marginalizing over the remaining features.
- Permutation Feature Importance: Measures the drop in model performance when a feature’s values are randomly shuffled (see the sketch after this list).
- Counterfactual Explanations: Identify the minimal changes needed to a data point to change its prediction.
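A minimal sketch of permutation feature importance using scikit-learn's `permutation_importance`; the regressor and dataset are illustrative assumptions.

```python
# Minimal sketch: permutation feature importance with scikit-learn.
# Model and dataset are illustrative, not from the original notes.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time on held-out data and record the drop in
# score; a large drop means the model relied on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean, std in zip(X.columns, result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```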
Deep Learning
CNN
Natural Language Processing
spaCy, NLTK
GenAI
Transformer
RAG
LangChain, LangSmith
LlamaIndex
DSPy