Description:
Smaller, task-specific models can be created using knowledge distillation from large language models (LLMs).
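A minimal sketch of the core distillation objective, soft-label knowledge distillation (Hinton et al.), in which the student is trained to match the teacher's temperature-softened output distribution. NumPy is used purely for illustration; in practice this loss would be applied to LLM logits inside a training framework, and the function names here are hypothetical.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients keep a comparable magnitude across T."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return (temperature ** 2) * np.sum(p * (np.log(p) - np.log(q)))

# A student that matches the teacher incurs zero loss;
# a mismatched student incurs a positive loss.
matched = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
mismatched = distillation_loss([2.0, 0.5, -1.0], [-1.0, 0.5, 2.0])
```

The question the project raises is what happens to group-level behaviour of the teacher when the student is optimised against this kind of objective.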
The goals of this project are to:
(1) Evaluate whether fairness is preserved when LLMs are distilled, across the distillation techniques used in the literature.
(2) Explore ways to preserve fairness during distillation.
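To make goal (1) concrete, one way to measure whether fairness is preserved is to compute a group-fairness metric for both teacher and student and compare them. Below is a toy illustration using the demographic parity gap (the absolute difference in positive-prediction rates between two groups); the data, group encoding, and function name are hypothetical, and a real evaluation would use established tooling and additional metrics.

```python
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates
    between group 0 and group 1 (binary predictions)."""
    predictions = np.asarray(predictions)
    groups = np.asarray(groups)
    rate_0 = predictions[groups == 0].mean()
    rate_1 = predictions[groups == 1].mean()
    return abs(rate_0 - rate_1)

# Toy binary predictions for members of two demographic groups.
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
teacher_preds = np.array([1, 1, 0, 1, 1, 0, 1, 1])  # equal rates: gap 0
student_preds = np.array([1, 1, 1, 1, 1, 0, 0, 0])  # unequal rates: gap grows

teacher_gap = demographic_parity_gap(teacher_preds, groups)
student_gap = demographic_parity_gap(student_preds, groups)
```

A distillation technique that preserves fairness would keep the student's gap close to the teacher's; a growing gap after distillation would be evidence of fairness degradation.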
Links:
- Bias and Fairness in Large Language Models: A Survey
- A Survey on Knowledge Distillation of Large Language Models