# American Institute of Mathematical Sciences

January  2017, 2(1): 69-75. doi: 10.3934/bdia.2017009

## Multiple-instance learning for text categorization based on semantic representation

 National Key Laboratory for Novel Software Technology, Nanjing University, China

Published  September 2017

Text categorization is a fundamental building block of many related research problems in NLP. Researchers have proposed many effective text categorization methods that achieve good performance. However, these methods are generally based on raw or low-level features, e.g., term frequency (tf) or tf-idf, and neglect the semantic structure among words, even though such semantic information can influence categorization accuracy. In this paper, we propose a new method that handles the semantic correlations between words and text features through both the representation and the learning scheme: each document is represented as a bag of multiple instances based on word2vec. Experiments validate the effectiveness of the proposed method compared with state-of-the-art text categorization methods.
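The abstract's central idea — representing a document as multiple instances derived from word2vec — can be sketched as follows. This is an illustrative reading rather than the paper's exact pipeline: the embedding table and its values are hypothetical stand-ins for trained word2vec vectors, and each in-vocabulary word contributes one instance (one row) to the bag.

```python
import numpy as np

# Toy embedding table standing in for trained word2vec vectors.
# The words and vector values are hypothetical, chosen only for illustration.
EMB = {
    "stock":  np.array([0.9, 0.1]),
    "market": np.array([0.8, 0.2]),
    "goal":   np.array([0.1, 0.9]),
    "match":  np.array([0.2, 0.8]),
}

def doc_to_bag(text, emb=EMB):
    """Map a document to a bag of instances: one embedding vector per known word.

    Out-of-vocabulary words are skipped; an empty bag is returned as a
    (0, dim) array so downstream code can still stack bags."""
    vecs = [emb[w] for w in text.lower().split() if w in emb]
    dim = len(next(iter(emb.values())))
    return np.vstack(vecs) if vecs else np.zeros((0, dim))

bag = doc_to_bag("Stock market rallies after match goal")
# bag has one row per in-vocabulary word (here 4 of the 6 words are known)
```

A multiple-instance learner then classifies the whole bag rather than a single pooled vector, which is what lets it exploit word-level semantic structure.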

Citation: Jian-Bing Zhang, Yi-Xin Sun, De-Chuan Zhan. Multiple-instance learning for text categorization based on semantic representation. Big Data & Information Analytics, 2017, 2 (1) : 69-75. doi: 10.3934/bdia.2017009
##### References:
[1] J. Amores, Multiple instance classification: Review, taxonomy and comparative study, Artificial Intelligence, 201 (2013), 81-105. doi: 10.1016/j.artint.2013.06.003.
[2] S. Andrews, I. Tsochantaridis and T. Hofmann, Support vector machines for multiple-instance learning, Advances in Neural Information Processing Systems, 15 (2002), 561-568.
[3] W. B. Cavnar and J. M. Trenkle, N-gram-based text categorization, Ann Arbor MI, 48113 (1994), 161-175.
[4] Y. Chevaleyre and J.-D. Zucker, Solving multiple-instance and multiple-part learning problems with decision trees and rule sets: Application to the mutagenesis problem, in Biennial Conference of the Canadian Society on Computational Studies of Intelligence: Advances in Artificial Intelligence, (2001), 204-214. doi: 10.1007/3-540-45153-6_20.
[5] T. G. Dietterich, R. H. Lathrop and T. Lozano-Pérez, Solving the multiple instance problem with axis-parallel rectangles, Artificial Intelligence, 89 (1997), 31-71. doi: 10.1016/S0004-3702(96)00034-3.
[6] S. Dumais, Using SVMs for text categorization, IEEE Expert, 13 (1998), 21-23.
[7] N. Ishii, T. Murai, T. Yamada and Y. Bao, Text classification by combining grouping, LSA and kNN, in IEEE/ACIS International Conference on Computer and Information Science and IEEE/ACIS International Workshop on Component-Based Software Engineering, Software Architecture and Reuse, (2006), 148-154. doi: 10.1109/ICIS-COMSAR.2006.81.
[8] Q. Kuang and X. Xu, Improvement and application of TF-IDF method based on text classification, in International Conference on Internet Technology and Applications, (2010), 1-4.
[9] S. Lai, L. Xu, K. Liu and J. Zhao, Recurrent convolutional neural networks for text classification, in AAAI, (2015), 2267-2273.
[10] O. Maron and T. Lozano-Pérez, A framework for multiple-instance learning, Advances in Neural Information Processing Systems, (1998), 570-576.
[11] A. McCallum and K. Nigam, A comparison of event models for naive Bayes text classification, in AAAI-98 Workshop on Learning for Text Categorization, (1998), 41-48.
[12] T. Mikolov, K. Chen, G. Corrado and J. Dean, Efficient estimation of word representations in vector space, arXiv preprint arXiv:1301.3781, 2013.
[13] T. Mikolov, I. Sutskever, K. Chen, G. Corrado and J. Dean, Distributed representations of words and phrases and their compositionality, Advances in Neural Information Processing Systems, 26 (2013), 3111-3119.
[14] J. Wang and J.-D. Zucker, Solving the multiple-instance problem: A lazy learning approach, in Proc. International Conf. on Machine Learning, (2000), 1119-1126.
[15] M.-L. Zhang and Z.-H. Zhou, Improve multi-instance neural networks through feature selection, Neural Processing Letters, 19 (2004), 1-10. doi: 10.1023/B:NEPL.0000016836.03614.9f.
[16] Z.-H. Zhou and M.-L. Zhang, Neural networks for multi-instance learning, in International Conference on Intelligent Information Technology, 2002.

Figure: The structure of Bag-of-Words and Skip-Gram

Figure: Pseudo-code for mi-SVM
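The figure above gives pseudo-code for mi-SVM [2], which treats instance labels inside positive bags as latent variables and alternates between training an SVM and re-imputing those labels. A minimal runnable sketch of that loop follows; it substitutes a Pegasos-style subgradient linear SVM (no bias term) for a full SVM solver, and the toy bags and parameter values are illustrative, not the paper's experimental setup.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient descent on the hinge loss (a linear SVM, no bias)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * X[i].dot(w) < 1.0:          # margin violated: shrink + push
                w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
            else:                                  # margin satisfied: shrink only
                w = (1.0 - eta * lam) * w
    return w

def mi_svm(bags, bag_labels, n_iters=10):
    """mi-SVM [2]: impute instance labels in positive bags, retrain, repeat."""
    # initialize: every instance inherits its bag's label
    inst_labels = [np.full(len(b), yb) for b, yb in zip(bags, bag_labels)]
    for _ in range(n_iters):
        w = train_linear_svm(np.vstack(bags), np.concatenate(inst_labels))
        changed = False
        for b, yb, yl in zip(bags, bag_labels, inst_labels):
            if yb != 1:
                continue                           # negative bags stay all-negative
            scores = b.dot(w)
            new = np.where(scores >= 0, 1, -1)
            if not (new == 1).any():               # constraint: a positive bag must
                new[np.argmax(scores)] = 1         # keep at least one positive instance
            if not np.array_equal(new, yl):
                yl[:] = new
                changed = True
        if not changed:
            break
    # final model trained on the converged instance labels
    return train_linear_svm(np.vstack(bags), np.concatenate(inst_labels))

def predict_bag(w, bag):
    """A bag is positive iff its highest-scoring instance is on the positive side."""
    return 1 if bag.dot(w).max() >= 0 else -1

# toy 2-D bags: each positive bag hides one truly positive instance near (2, 2)
bags = [np.array([[2.0, 2.0], [-2.0, -2.0]]),
        np.array([[1.5, 2.5], [-2.0, -1.0]]),
        np.array([[-2.0, -2.0], [-1.5, -2.5]]),
        np.array([[-2.5, -1.5]])]
labels = [1, 1, -1, -1]
w = mi_svm(bags, labels)
preds = [predict_bag(w, b) for b in bags]
```

Combined with a word2vec bag representation, this is the SVM + multiple-instance pipeline the experiments below compare against single-instance baselines.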
Table: Results of experiments on sougouC

| Model | car | finance | IT | health | sport |
|-------|-----|---------|----|--------|-------|
| SVM + TF-IDF | 0.8473 | 0.8420 | 0.8363 | 0.8326 | 0.8737 |
| SVM + Word2vec | 0.9303 | 0.8571 | 0.8755 | 0.9163 | 0.9828 |
| mi-SVM + Word2vec | 0.9599 | 0.8904 | 0.8943 | 0.9325 | 0.9842 |
Table: Results of experiments on 20newsgroup

| Model | SVM + TF-IDF | SVM + Word2vec | mi-SVM + Word2vec |
|-------|--------------|----------------|-------------------|
| Average | 0.8508 | 0.8421 | 0.8619 |
