1 (Department of Computer Science and Technology, College of Electronic and Information Engineering, Tongji University, Shanghai 201804, China)
2 (Key Laboratory of Embedded System and Service Computing, Ministry of Education, Tongji University, Shanghai 201804, China)
3 (Key Laboratory of Watershed Ecology and Geographical Environment Monitoring, NASG, Jinggangshan University, Ji'an 343009, China)
4 (Department of Computer Science and Technology, College of Electronic and Information Engineering, Jinggangshan University, Ji'an 343009, China)
Abstract: Robustness is becoming increasingly important in learning to rank, yet most existing learning-to-rank algorithms focus only on improving the effectiveness of the ranking model, while its robustness is often overlooked. To enhance the robustness of the ranking model, effectiveness and robustness can be considered jointly during training. In this paper, we study how to optimize the tradeoff between effectiveness and robustness for the LambdaMART learning-to-rank algorithm from a novel perspective, namely the bias-variance tradeoff. We integrate bias and variance into a unified objective function to modify the gradient of the LambdaMART algorithm, and prove that the modified gradient can still be optimized by the LambdaMART algorithm to train the ranking model. Finally, experimental results on learning-to-rank datasets show that the modified LambdaMART algorithm based on the bias-variance tradeoff achieves stronger robustness.
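As a rough illustration of the tradeoff idea only (not the paper's exact formulation), the following minimal Python sketch assumes two per-document lambda gradients are available for each query: an effectiveness-oriented one (e.g., the standard NDCG-based LambdaMART gradient, related to the bias term) and a robustness-oriented one (e.g., a gradient penalizing variance across queries). The names `effectiveness_lambda`, `robustness_lambda`, and the blending weight `alpha` are hypothetical; the paper's unified objective may weight and derive these terms differently.

```python
import numpy as np

def blended_lambda(effectiveness_lambda, robustness_lambda, alpha=0.5):
    """Combine an effectiveness-oriented lambda gradient with a
    robustness-oriented one via a convex tradeoff weight alpha.

    alpha = 1.0 recovers the standard effectiveness-only gradient;
    smaller alpha puts more weight on the robustness (variance) term.
    This is an illustrative sketch, not the paper's exact gradient.
    """
    return alpha * effectiveness_lambda + (1.0 - alpha) * robustness_lambda

# Toy usage: lambdas for three documents of a single query.
eff = np.array([0.8, -0.3, -0.5])   # hypothetical NDCG-based lambdas
rob = np.array([0.2, -0.1, -0.1])   # hypothetical variance-penalty lambdas
print(blended_lambda(eff, rob, alpha=0.7))
```

The blended values would then be fed to the gradient-boosted trees in place of the original lambdas, which is consistent with the abstract's claim that the modified gradient can still be optimized within the LambdaMART framework.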