Credit Risk Case Study

The tutorial notebook in your repository is a compact example of how explainability is used in practice. Instead of jumping between unrelated toy datasets, it keeps one credit-risk model and studies it through several lenses.

The pipeline is straightforward:

  1. load the German credit dataset,
  2. encode categorical variables,
  3. split train and test sets,
  4. fit a LightGBM classifier,
  5. explain the fitted model globally and locally.
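The first four steps can be sketched in a few lines. This is a minimal stand-in, not the notebook itself: the real German credit dataset and LightGBM are replaced here by a synthetic dataset and scikit-learn's `GradientBoostingClassifier` so the sketch runs without extra dependencies.

```python
# Sketch of the pipeline's shape (hypothetical stand-in data and model).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic numeric features stand in for the encoded German credit data,
# so the categorical-encoding step is already "done" here.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Stand-in for the LightGBM classifier in the notebook.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Evaluate on the held-out split before explaining anything.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.3f}")
```

Everything after this point, global and local explanation alike, operates on the fitted `model` and the held-out split.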

That structure is worth remembering because it reflects how explainability is usually used on real projects: after we already have a working predictive model.

The notebook first computes permutation importance on the held-out data and then compares it with LOFO retraining.

These two views answer a similar question, but they do not measure importance in the same way: permutation importance shuffles a feature's values in the held-out data without retraining, while LOFO retrains the model with the feature removed entirely. Looking at both is often more informative than trusting a single ranking.
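The mechanical difference between the two is easy to show by hand. Below is a minimal sketch for a single feature, using a hypothetical synthetic dataset and a logistic model rather than the notebook's own data and LightGBM.

```python
# Two importance views for one feature (hypothetical data and model).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(
    n_samples=600, n_features=5, n_informative=3, random_state=1
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = LogisticRegression().fit(X_tr, y_tr)
base = accuracy_score(y_te, model.predict(X_te))

j = 0  # feature under study

# Permutation importance: shuffle column j in the test set, NO retraining.
rng = np.random.default_rng(0)
X_perm = X_te.copy()
X_perm[:, j] = rng.permutation(X_perm[:, j])
perm_drop = base - accuracy_score(y_te, model.predict(X_perm))

# LOFO: drop column j entirely and retrain from scratch.
X_tr_l, X_te_l = np.delete(X_tr, j, axis=1), np.delete(X_te, j, axis=1)
lofo_model = LogisticRegression().fit(X_tr_l, y_tr)
lofo_drop = base - accuracy_score(y_te, lofo_model.predict(X_te_l))

print(f"permutation drop: {perm_drop:.3f}, LOFO drop: {lofo_drop:.3f}")
```

Because LOFO retrains, correlated features can compensate for the removed one, so the two drops can disagree; that disagreement is itself informative.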

A ranking alone does not tell us how a feature changes the prediction. That is why the notebook next draws ICE and PDP views.

This is a good workflow lesson:

  • feature importance tells us what matters,
  • ICE and PDP tell us how it matters.
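Both curves come from the same simple computation: pin one feature at a grid value, predict for every row, and repeat across the grid. Each row traces an ICE curve, and averaging them gives the PDP. A minimal sketch, again on hypothetical stand-in data:

```python
# ICE and PDP by hand (hypothetical data and model).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=4, random_state=2)
model = LogisticRegression().fit(X, y)

j = 0
grid = np.linspace(X[:, j].min(), X[:, j].max(), 20)

ice = np.empty((len(X), len(grid)))   # one ICE curve per row of X
for g, value in enumerate(grid):
    X_mod = X.copy()
    X_mod[:, j] = value               # pin feature j at this grid value
    ice[:, g] = model.predict_proba(X_mod)[:, 1]

pdp = ice.mean(axis=0)                # the PDP is the average of ICE curves
```

If the ICE curves all have the same shape, the PDP summarizes them faithfully; if they fan out or cross, the average hides interaction effects, which is exactly why the notebook draws both.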

The same notebook also moves to local explanations with LIME and SHAP.

  • LIME produces a sparse local explanation around one selected person.
  • SHAP decomposes the prediction into a base value plus feature contributions.
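SHAP's decomposition obeys an additivity property: the base value plus the per-feature contributions reconstructs the individual prediction exactly. For a linear model with independent features the exact Shapley values are known in closed form, phi_j = w_j * (x_j - mean_j), so the property can be checked by hand without the `shap` library. Everything below (weights, data) is a hypothetical example.

```python
# SHAP additivity checked on a linear model, where exact Shapley values
# are w_j * (x_j - mean_j) under feature independence (hypothetical data).
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
w = np.array([0.8, -1.2, 0.3, 0.0])   # hypothetical model weights
b = 0.5

def f(X):
    return X @ w + b

base_value = f(X).mean()              # expected model output over the data
x = X[0]                              # one "applicant"
phi = w * (x - X.mean(axis=0))        # exact per-feature contributions

# Additivity: base value + contributions == this applicant's prediction.
assert np.isclose(base_value + phi.sum(), f(x[None])[0])
```

For a tree ensemble like LightGBM the contributions are computed differently (TreeSHAP), but the same additivity holds, which is what makes the waterfall-style reading of one applicant's score possible.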

At this point we are no longer asking how the model behaves on average. We are asking why one particular applicant received the score they did.

This case study suggests a simple and useful order of operations:

  1. evaluate the predictive model first,
  2. inspect global importance,
  3. study feature-response shape,
  4. investigate suspicious or high-impact cases locally,
  5. compare explanation methods rather than relying on one alone.

That is much closer to real practice than treating explainability as a single plot or a single package call.

In this lesson we covered:

  1. A full credit-risk explainability workflow from model fitting to diagnosis
  2. Global ranking tools with permutation importance and LOFO
  3. Response-shape analysis with ICE and PDP
  4. Individual-level explanation with LIME and SHAP

Next: This completes the explainability course and gives you a practical toolkit for auditing model behavior.