[1]
E. Brill, S. Dumais, and M. Banko. An analysis of the AskMSR question-answering system. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002), pages 257--264. Association for Computational Linguistics, 2002. [ bib | DOI ]
[2]
M. A. C. Soares and F. S. Parreiras. A literature review on question answering techniques, paradigms and systems. Journal of King Saud University - Computer and Information Sciences, 32(6), 2020. [ bib | DOI ]
[3]
J. Pérez, M. Arenas, and C. Gutierrez. Semantics and Complexity of SPARQL. In International Semantic Web Conference, pages 30--43. Springer, 2006. [ bib | DOI ]
[4]
J. W. F. da Silva, A. D. P. Venceslau, J. E. Sales, J. G. R. Maia, V. C. M. Pinheiro, and V. M. P. Vidal. A short survey on end-to-end simple question answering systems. Artificial Intelligence Review, 53(7):5429--5453, 2020. [ bib | DOI ]
[5]
A. Echihabi and D. Marcu. A Noisy-Channel Approach to Question Answering. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 16--23. Association for Computational Linguistics, 2003. [ bib | DOI ]
[6]
R. Sequiera, G. Baruah, Z. Tu, S. Mohammed, J. Rao, H. Zhang, and J. Lin. Exploring the effectiveness of convolutional neural networks for answer selection in end-to-end question answering. arXiv preprint arXiv:1707.07804, 2017. [ bib | DOI ]
[7]
A. Mishra and S. K. Jain. A survey on question answering systems with classification. Journal of King Saud University - Computer and Information Sciences, 28(3):345--361, 2016. [ bib | DOI ]
[8]
S. W. Yih, M. Chang, C. Meek, and A. Pastusiak. Question Answering Using Enhanced Lexical Semantic Models. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. ACL, 2013. [ bib | DOI ]
[9]
S. Yoon, F. Dernoncourt, D. S. Kim, T. Bui, and K. Jung. A Compare-Aggregate Model with Latent Clustering for Answer Selection. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 2093--2096. ACM, 2019. [ bib | DOI ]
[10]
H. T. Madabushi, M. Lee, and J. Barnden. Integrating Question Classification and Deep Learning for Improved Answer Selection. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3283--3294, 2018. [ bib | DOI ]
[11]
J. Devlin, M. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171--4186. Association for Computational Linguistics, 2019. [ bib | DOI ]
[12]
M. Wang, N. A. Smith, and T. Mitamura. What is the Jeopardy Model? A Quasi-Synchronous Grammar for QA. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 22--32, 2007. [ bib | DOI ]
[13]
Y. Yang, W. Yih, and C. Meek. WikiQA: A Challenge Dataset for Open-Domain Question Answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2013--2018, 2015. [ bib | DOI ]
[14]
S. Wan, M. Dras, R. Dale, and C. Paris. Using Dependency-Based Features to Take the 'Para-farce' out of Paraphrase. In Proceedings of the Australasian Language Technology Workshop 2006, pages 131--138, 2006. [ bib | DOI ]
[15]
M. Surdeanu, M. Ciaramita, and H. Zaragoza. Learning to Rank Answers on Large Online QA Collections. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, pages 719--727, 2008. [ bib | DOI ]
[16]
J. Mozafari, M. A. Nematbakhsh, and A. Fatemi. Attention-based pairwise multi-perspective convolutional neural network for answer selection in question answering. arXiv preprint arXiv:1909.01059, 2019. [ bib | DOI ]
[17]
P. Oram. WordNet: An Electronic Lexical Database. Christiane Fellbaum (Ed.). Cambridge, MA: MIT Press, 1998. Pp. 423. Applied Psycholinguistics, 22(1):131--134, 2001. [ bib | DOI ]
[18]
Z. Tu. An experimental analysis of multi-perspective convolutional neural networks. M.S. thesis, University of Waterloo, 2018. [ bib ]
[19]
V. Punyakanok, D. Roth, and W. Yih. Mapping Dependencies Trees: An Application to Question Answering. In Proceedings of the 8th International Symposium on Artificial Intelligence and Mathematics, Fort, 2004. [ bib | DOI ]
[20]
M. Heilman and N. A. Smith. Tree Edit Models for Recognizing Textual Entailments, Paraphrases, and Answers to Questions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 1011--1019, 2010. [ bib | DOI ]
[21]
M. Wang and C. D. Manning. Probabilistic Tree-Edit Models with Structured Latent Variables for Textual Entailment and Question Answering. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 1164--1172, 2010. [ bib | DOI ]
[22]
X. Yao, B. Van Durme, C. Callison-Burch, and P. Clark. Answer Extraction as Sequence Tagging with Tree Edit Distance. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 858--867, 2013. [ bib | DOI ]
[23]
A. Severyn and A. Moschitti. Automatic Feature Engineering for Answer Selection and Extraction. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 458--467, 2013. [ bib | DOI ]
[24]
J. Bromley, J. W. Bentz, L. Bottou, I. Guyon, Y. LeCun, C. Moore, E. Säckinger, and R. Shah. Signature verification using a “siamese” time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(4):669--688, 1993. [ bib | DOI ]
[25]
T. Lai, T. Bui, and S. Li. A Review on Deep Learning Techniques Applied to Answer Selection. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2132--2144, 2018. [ bib | DOI ]
[26]
L. Yu, K. M. Hermann, P. Blunsom, and S. Pulman. Deep Learning for Answer Sentence Selection. arXiv preprint arXiv:1412.1632, 2014. [ bib | DOI ]
[27]
M. Feng, B. Xiang, M. R. Glass, L. Wang, and B. Zhou. Applying deep learning to answer selection: A study and an open task. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 813--820. IEEE, 2015. [ bib | DOI ]
[28]
H. He, K. Gimpel, and J. Lin. Multi-Perspective Sentence Similarity Modeling with Convolutional Neural Networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1576--1586, 2015. [ bib | DOI ]
[29]
J. Rao, H. He, and J. Lin. Noise-Contrastive Estimation for Answer Selection with Deep Neural Networks. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pages 1913--1916. ACM, 2016. [ bib | DOI ]
[30]
S. Kamath, B. Grau, and Y. Ma. Predicting and Integrating Expected Answer Types into a Simple Recurrent Neural Network Model for Answer Sentence Selection. Computación y Sistemas, 23(3), 2019. [ bib | DOI ]
[31]
L. Yang, Q. Ai, J. Guo, and W. B. Croft. aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pages 287--296. ACM, 2016. [ bib | DOI ]
[32]
Z. Niu, G. Zhong, and H. Yu. A review on the attention mechanism of deep learning. Neurocomputing, 452:48--62, 2021. [ bib | DOI ]
[33]
H. He, J. Wieting, K. Gimpel, J. Rao, and J. Lin. UMD-TTIC-UW at SemEval-2016 Task 1: Attention-Based Multi-Perspective Convolutional Neural Networks for Textual Similarity Measurement. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1103--1108, 2016. [ bib | DOI ]
[34]
S. Wang and J. Jiang. A Compare-Aggregate Model for Matching Text Sequences. arXiv preprint arXiv:1611.01747, 2016. [ bib | DOI ]
[35]
H. He and J. Lin. Pairwise Word Interaction Modeling with Deep Neural Networks for Semantic Similarity Measurement. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 937--948, 2016. [ bib | DOI ]
[36]
Z. Wang, W. Hamza, and R. Florian. Bilateral Multi-Perspective Matching for Natural Language Sentences. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, pages 4144--4150, 2017. [ bib | DOI ]
[37]
W. Bian, S. Li, Z. Yang, G. Chen, and Z. Lin. A Compare-Aggregate Model with Dynamic-Clip Attention for Answer Selection. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 1987--1990. ACM, 2017. [ bib | DOI ]
[38]
G. Shen, Y. Yang, and Z. Deng. Inter-Weighted Alignment Network for Sentence Pair Modeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2017. [ bib | DOI ]
[39]
Q. H. Tran, T. Lai, G. Haffari, I. Zukerman, T. Bui, and H. Bui. The Context-Dependent Additive Recurrent Neural Net. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1274--1283, 2018. [ bib | DOI ]
[40]
K. Lee, O. Levy, and L. Zettlemoyer. Recurrent Additive Networks. arXiv preprint arXiv:1705.07393, 2017. [ bib | DOI ]
[41]
J. Howard and S. Ruder. Universal Language Model Fine-tuning for Text Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 328--339. Association for Computational Linguistics, 2018. [ bib | DOI ]
[42]
M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018. [ bib | DOI ]
[43]
Md. T. R. Laskar, X. Huang, and E. Hoque. Contextualized Embeddings based Transformer Encoder for Sentence Similarity Modeling in Answer Selection Task. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 5505--5514, 2020. [ bib | DOI ]
[44]
J. Mozafari, A. Fatemi, and P. Moradi. A Method For Answer Selection Using DistilBERT And Important Words. In 2020 6th International Conference on Web Research (ICWR), pages 72--76. IEEE, 2020. [ bib | DOI ]
[45]
V. Sanh, L. Debut, J. Chaumond, and T. Wolf. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019. [ bib | DOI ]
[46]
Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692, 2019. [ bib | DOI ]
[47]
O. Shonibare. ASBERT: Siamese and Triplet network embedding for open question answering. arXiv preprint arXiv:2104.08558, 2021. [ bib | DOI ]
[48]
Z. Wang, H. Mi, and A. Ittycheriah. Sentence Similarity Learning by Lexical Decomposition and Composition. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics, pages 1340--1349. ACL, 2016. [ bib | DOI ]
[49]
Y. Shen, Y. Deng, M. Yang, Y. Li, N. Du, W. Fan, and K. Lei. Knowledge-Aware Attentive Neural Network for Ranking Question Answer Pairs. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 901--904. ACM, 2018. [ bib | DOI ]
[50]
R. Yang, J. Zhang, X. Gao, F. Ji, and H. Chen. Simple and Effective Text Matching with Richer Alignment Features. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4699--4709. Association for Computational Linguistics, 2019. [ bib | DOI ]
[51]
R. Han, L. Soldaini, and A. Moschitti. Modeling Context in Answer Sentence Selection Systems on a Latency Budget. arXiv preprint arXiv:2101.12093, 2021. [ bib | DOI ]
[52]
C. Subakan, M. Ravanelli, S. Cornell, M. Bronzi, and J. Zhong. Attention Is All You Need In Speech Separation. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 21--25. IEEE, 2021. [ bib | DOI ]
[53]
I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems, pages 3104--3112, 2014. [ bib | DOI ]
[54]
V. Nair and G. E. Hinton. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the 27th International Conference on Machine Learning, pages 807--814, 2010. [ bib | DOI ]
[55]
N. Buduma and N. Locascio. Fundamentals of deep learning: Designing next-generation machine intelligence algorithms. O'Reilly Media, Inc., 2017. [ bib ]
[56]
V. Subramanian. Deep Learning with PyTorch: A practical approach to building neural network models using PyTorch. Packt Publishing Ltd, 2018. [ bib ]
[57]
D. Hendrycks and K. Gimpel. Bridging Nonlinearities and Stochastic Regularizers with Gaussian Error Linear Units. arXiv preprint arXiv:1606.08415, 2016. [ bib | DOI ]
[58]
I. Loshchilov and F. Hutter. Decoupled Weight Decay Regularization. arXiv preprint arXiv:1711.05101, 2019. [ bib | DOI ]
[59]
Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:1909.11942, 2019. [ bib | DOI ]
[60]
J. Guo, Y. Fan, L. Pang, L. Yang, Q. Ai, H. Zamani, C. Wu, W. B. Croft, and X. Cheng. A deep look into neural ranking models for information retrieval. Information Processing & Management, 57(6), 2020. [ bib | DOI ]
[61]
C. Subakan, M. Ravanelli, S. Cornell, M. Bronzi, and J. Zhong. Attention Is All You Need In Speech Separation. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 21--25. IEEE, 2021. [ bib | DOI ]
[62]
J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85--117, 2015. [ bib | DOI ]
[63]
S. Minaee, N. Kalchbrenner, E. Cambria, N. Nikzad, M. Chenaghlu, and J. Gao. Deep Learning--Based Text Classification: A Comprehensive Review. ACM Computing Surveys (CSUR), 54(3):1--40, 2021. [ bib | DOI ]