IMR OpenIR
A novel cross-modal hashing algorithm based on multimodal deep learning
Qu Wen [1]; Wang Daling [1]; Feng Shi [1]; Zhang Yifei [1]; Yu Ge [1]
2017
Source Publication: SCIENCE CHINA-INFORMATION SCIENCES
ISSN: 1674-733X
Volume: 60; Issue: 9
Abstract: With the growing popularity of multimodal data on the Web, cross-modal retrieval on large-scale multimedia databases has become an important research topic. Cross-modal retrieval methods based on hashing assume that there is a latent space shared by multimodal features. To model the relationship among heterogeneous data, most existing methods embed the data into a joint abstraction space by linear projections. However, these approaches are sensitive to noise in the data and are unable to make use of unlabeled data and multimodal data with missing values in real-world applications. To address these challenges, we propose a novel multimodal deep-learning-based hash (MDLH) algorithm. In particular, MDLH uses a deep neural network to encode heterogeneous features into a compact common representation and learns the hash functions based on the common representation. The parameters of the whole model are fine-tuned in a supervised training stage. Experiments on two standard datasets show that the method achieves more effective results than other methods in cross-modal retrieval.
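For readers who want a concrete picture of the architecture the abstract describes, the snippet below is a minimal, hypothetical PyTorch sketch of the general idea, not the authors' implementation: two modality-specific encoders map heterogeneous image and text features into a shared representation, a shared projection produces relaxed hash codes, and the codes are binarized with sign() at retrieval time. The class names (ModalityEncoder, CrossModalHasher), layer sizes, feature dimensions, and code length are illustrative assumptions; the supervised fine-tuning stage mentioned in the abstract is omitted.

```python
# Hedged sketch (not the MDLH authors' code): a two-branch multimodal encoder
# that maps image and text features into a shared space and derives hash codes.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Maps one modality's feature vector into a shared latent space."""

    def __init__(self, in_dim: int, shared_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512),
            nn.ReLU(),
            nn.Linear(512, shared_dim),
            nn.Tanh(),  # keep the shared representation bounded in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)


class CrossModalHasher(nn.Module):
    """Two modality-specific encoders followed by a shared hashing projection."""

    def __init__(self, img_dim=4096, txt_dim=1000, shared_dim=256, n_bits=32):
        super().__init__()
        self.img_enc = ModalityEncoder(img_dim, shared_dim)
        self.txt_enc = ModalityEncoder(txt_dim, shared_dim)
        # One hash projection shared by both modalities, so codes are comparable.
        self.hash_proj = nn.Linear(shared_dim, n_bits)

    def forward(self, img_feat=None, txt_feat=None):
        codes = {}
        if img_feat is not None:
            codes["image"] = torch.tanh(self.hash_proj(self.img_enc(img_feat)))
        if txt_feat is not None:
            codes["text"] = torch.tanh(self.hash_proj(self.txt_enc(txt_feat)))
        return codes  # relaxed (continuous) codes; binarize with sign() for retrieval


# Illustrative usage with random stand-ins for the features of 8 items.
model = CrossModalHasher()
img = torch.randn(8, 4096)   # e.g. CNN image features (dimensionality assumed)
txt = torch.randn(8, 1000)   # e.g. bag-of-words text features (assumed)
relaxed = model(img_feat=img, txt_feat=txt)
binary_img_codes = torch.sign(relaxed["image"])   # ±1 hash codes, image modality
binary_txt_codes = torch.sign(relaxed["text"])    # ±1 hash codes, text modality
```

In a full system, the relaxed codes would first be trained with a supervised similarity objective across modalities and only then binarized, which corresponds to the supervised fine-tuning stage the abstract mentions.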
Keywords: hashing; cross-modal retrieval; cross-modal hashing; multimodal data analysis; deep learning
Indexed By: CSCD
Language: English
Funding Project: National Natural Science Foundation of China; Fundamental Research Funds for the Central Universities of China
CSCD ID: CSCD:6087845
Citation Statistics: Cited Times: 5 (CSCD)
Document Type: Journal Article
Identifier: http://ir.imr.ac.cn/handle/321006/151634
Collection: Institute of Metal Research, Chinese Academy of Sciences
Affiliation: 1. Northeastern University
2. Institute of Metal Research, Chinese Academy of Sciences
Recommended Citation
GB/T 7714: Qu Wen, Wang Daling, Feng Shi, et al. A novel cross-modal hashing algorithm based on multimodal deep learning[J]. SCIENCE CHINA-INFORMATION SCIENCES, 2017, 60(9).
APA: Qu Wen, Wang Daling, Feng Shi, Zhang Yifei, & Yu Ge. (2017). A novel cross-modal hashing algorithm based on multimodal deep learning. SCIENCE CHINA-INFORMATION SCIENCES, 60(9).
MLA: Qu Wen, et al. "A novel cross-modal hashing algorithm based on multimodal deep learning". SCIENCE CHINA-INFORMATION SCIENCES 60.9 (2017).
Files in This Item:
There are no files associated with this item.
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.