Credit Risk Case Study
One Model, Many Explanations
The tutorial notebook in your repository is a nice example of how explainability is used in practice. Instead of jumping between unrelated toy datasets, it keeps one credit-risk model and studies it through several lenses.
The pipeline is straightforward:
- load the German credit dataset,
- encode categorical variables,
- split train and test sets,
- fit a LightGBM classifier,
- explain the fitted model globally and locally.
That structure is worth remembering because it reflects how explainability is usually used on real projects: after we already have a working predictive model.
Start with Global Importance
The notebook first computes permutation importance on the held-out data and then compares it with LOFO retraining.
These two views answer a similar question but measure it differently: permutation importance shuffles one feature's values and re-scores the already-fitted model, while LOFO (leave one feature out) retrains the model without that feature and measures the score change. Looking at both is often more informative than trusting a single ranking.
Then Inspect Feature Effects
A ranking alone does not tell us how a feature changes the prediction. That is why the notebook next draws ICE (individual conditional expectation) curves and PDP (partial dependence plot) views.
This is a good workflow lesson:
- feature importance tells us what matters,
- ICE and PDP tell us how it matters.
Finally Zoom In on Individuals
The same notebook also moves to local explanations with LIME and SHAP.
- LIME produces a sparse local explanation around one selected person.
- SHAP decomposes the prediction into a base value plus feature contributions.
At this point we are no longer asking how the model behaves on average. We are asking why one particular applicant received the score they did.
A Practical XAI Workflow
This case study suggests a simple and useful order of operations:
- evaluate the predictive model first,
- inspect global importance,
- study feature-response shape,
- investigate suspicious or high-impact cases locally,
- compare explanation methods rather than relying on one alone.
That is much closer to real practice than treating explainability as a single plot or a single package call.
Summary
In this lesson we covered:
- A full credit-risk explainability workflow from model fitting to diagnosis
- Global ranking tools with permutation importance and LOFO
- Response-shape analysis with ICE and PDP
- Individual-level explanation with LIME and SHAP
Next: This completes the explainability course and gives you a practical toolkit for auditing model behavior.