TY - GEN
T1 - F2VAE: A framework for mitigating user unfairness in recommendation systems
T2 - ACM/SIGAPP Symposium on Applied Computing
AU - Borges, Rodrigo
AU - Stefanidis, Kostas
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/4/25
Y1 - 2022/4/25
AB - Recommendation algorithms are widely used nowadays, especially in scenarios of information overload (i.e., when users have too many options to choose from), due to their ability to suggest potentially relevant items to users in a personalized fashion. Users, nevertheless, might be separated into groups according to sensitive attributes, such as age, gender or nationality, and the recommendation process might be biased towards one of these groups. If observed, this bias has to be mitigated actively, or it can propagate and be amplified over time. Here, we consider a relevant difference in recommendation quality among groups as unfair, and we argue that this difference should be kept as low as possible. We propose a framework named F2VAE for mitigating user-oriented unfairness in recommender systems. The framework is based on Variational Autoencoders (VAE), and it introduces two extra terms in the VAE's standard loss function, one associated with fair representation and another associated with fair recommendation. The conflicting objectives associated with these terms are discussed in detail in a series of experiments considering the bias associated with users' nationality in a music consumption dataset. We revisit recent work on generating fair representations in the context of classification, and we adapt one of these methods to the recommendation task. Compared to a standard VAE, F2VAE increased precision by approximately 1% while reducing unfairness by 21%.
KW - bias
KW - fair representation
KW - fairness
KW - recommender systems
KW - user fairness
U2 - 10.1145/3477314.3507152
DO - 10.1145/3477314.3507152
M3 - Conference contribution
AN - SCOPUS:85130411749
T3 - Proceedings of the ACM Symposium on Applied Computing
SP - 1391
EP - 1398
BT - Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing, SAC 2022
PB - ACM
Y2 - 25 April 2022 through 29 April 2022
ER -