oai:arXiv.org:2407.05656
Computer Science
2024
10.07.2024
The extreme multi-label classification~(XMC) task involves learning a classifier that can predict from a large label set the most relevant subset of labels for a data instance.
While deep neural networks~(DNNs) have demonstrated remarkable success in XMC problems, the task remains challenging because the very large number of output labels makes DNN training computationally expensive.
This paper addresses the issue by exploring the use of random circular vectors, where each vector component is represented as a complex amplitude.
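A minimal sketch of this representation, assuming an FHRR-style phase encoding; the helper name encode_label_set and the vector sizes below are illustrative, not taken from the paper. Each label is assigned a random vector of phase angles, and a label set is encoded by superposing the corresponding unit complex amplitudes and renormalizing each component back onto the unit circle.

    import numpy as np

    rng = np.random.default_rng(0)
    num_labels, dim = 1000, 128   # hypothetical sizes, not from the paper

    # Each label gets a random circular vector: dim phase angles drawn uniformly
    # from (-pi, pi], i.e., dim unit-magnitude complex amplitudes.
    label_phases = rng.uniform(-np.pi, np.pi, size=(num_labels, dim))
    label_vectors = np.exp(1j * label_phases)          # shape (num_labels, dim)

    def encode_label_set(label_ids):
        # Superpose the complex amplitudes of the chosen labels and push each
        # component back onto the unit circle, keeping only the phases.
        superposition = label_vectors[label_ids].sum(axis=0)
        return superposition / (np.abs(superposition) + 1e-12)

    encoded = encode_label_set([3, 41, 977])   # one low-dimensional vector for 3 labels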
In our framework, we develop an output layer and loss function of DNNs for XMC by representing the final output layer as a fully connected layer that directly predicts a low-dimensional circular vector encoding the label set of a data instance.
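A hedged PyTorch sketch of what such an output layer and a phase-similarity retrieval step might look like; the names CircularOutputLayer and retrieve_labels, the (cos, sin) parameterization, and the top-k scoring rule are assumptions for illustration, not the paper's exact formulation.

    import torch
    import torch.nn as nn

    class CircularOutputLayer(nn.Module):
        # Maps an encoder representation to a low-dimensional circular vector:
        # the linear layer emits a (cos, sin) pair per dimension, converted to
        # a phase angle, instead of one logit per label.
        def __init__(self, hidden_dim: int, circ_dim: int):
            super().__init__()
            self.proj = nn.Linear(hidden_dim, 2 * circ_dim)

        def forward(self, h):
            real, imag = self.proj(h).chunk(2, dim=-1)
            return torch.atan2(imag, real)     # phases in (-pi, pi]

    def retrieve_labels(pred_phases, label_phases, top_k=5):
        # Score each label by the mean cosine of the phase differences between
        # the predicted circular vector and that label's random circular vector,
        # then return the indices of the top-k most similar labels.
        scores = torch.cos(pred_phases.unsqueeze(0) - label_phases).mean(dim=-1)
        return scores.topk(top_k).indices

    # usage: layer = CircularOutputLayer(hidden_dim=768, circ_dim=128)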
We conducted experiments on synthetic datasets to verify that circular vectors have better label encoding capacity and retrieval ability than normal real-valued vectors.
Then, we conducted experiments on actual XMC datasets and found that these appealing properties of circular vectors contribute to significant improvements in task performance compared with a previous model using random real-valued vectors, while reducing the size of the output layers by up to 99%.
Comment: 11 pages, 6 figures, 3 tables; accepted to the RepL4NLP workshop held in conjunction with ACL 2024
Nishida, Ken; Machi, Kojiro; Onishi, Kazuma; Hayashi, Katsuhiko; Kamigaito, Hidetaka, 2024, Multi-label Learning with Random Circular Vectors