An Enhancement of Long-Short Term Memory for the Implementation of Virtual Assistant for Pamantasan ng Lungsod ng Maynila Students

  • Draillim Xaviery F. Valonzo Computer Science Department, Pamantasan ng Lungsod ng Maynila, Philippines
  • Jose Ramon M. Jasa Computer Science Department, Pamantasan ng Lungsod ng Maynila, Philippines
  • Mark Christopher R. Blanco Computer Science Department, Pamantasan ng Lungsod ng Maynila, Philippines
  • Khatalyn E. Mata Computer Science Department, Pamantasan ng Lungsod ng Maynila, Philippines
  • Dan Michael A. Cortez Computer Science Department, Pamantasan ng Lungsod ng Maynila, Philippines

Abstract

Purpose – Natural Language Processing (NLP) is an aspect of Artificial Intelligence that focuses on how technology can understand words, derive meaning from them, and return a meaningful and correct output; it is therefore used in building today's virtual assistants. Training virtual assistants requires handling long temporal dependencies and sequence-to-sequence classification. This study develops an algorithm intended to enhance the performance of a virtual assistant designed for PLM students and faculty members.

Method – LSTM (Long Short-Term Memory) is used to train the model to address these concerns. However, the LSTM algorithm suffers from slow computing speed and high computation costs. To address this, the researchers applied TensorFlow XLA (Accelerated Linear Algebra) to the model to optimize its computation costs.
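As a minimal sketch of the approach described above (the layer size, batch shape, and function names here are illustrative assumptions, not the paper's actual configuration), TensorFlow allows XLA compilation to be enabled per function by passing `jit_compile=True` to `tf.function`, which compiles the traced graph and fuses operations to reduce memory traffic and per-op overhead:

```python
import tensorflow as tf

# Hypothetical example: one LSTM cell step compiled with XLA.
# jit_compile=True asks TensorFlow to compile the traced graph
# with XLA, fusing kernels to cut memory and CPU cost.
@tf.function(jit_compile=True)
def lstm_step(cell, x, states):
    return cell(x, states)

cell = tf.keras.layers.LSTMCell(32)        # 32 hidden units (assumed)
x = tf.random.normal([8, 16])              # batch of 8, feature size 16
states = cell.get_initial_state(batch_size=8, dtype=tf.float32)
out, new_states = lstm_step(cell, x, states)
print(out.shape)  # (8, 32)
```

Recent TensorFlow versions also accept `jit_compile=True` in `tf.keras.Model.compile`, which applies the same XLA compilation to the whole training step rather than a single function.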

Results – Although the number of matrices grew from 934 to 30,000, training still showed improvements in memory usage, CPU (Central Processing Unit) utilization, and training time. At 50 epochs, training the model with XLA reduced training time by 8 minutes and saved up to 500 megabytes of memory.

Conclusion – XLA improved the LSTM algorithm's memory usage, CPU utilization, and overall training speed, especially in longer training runs.

Recommendations – The researchers recommend exploring XLA in the context of model pruning, examining how pruning paired with XLA affects performance, to further maximize the model's efficiency.

Practical Implication – This would allow much more efficient and cost-effective training of the model when it is fed new data for a virtual assistant designed for PLM students and faculty members.

Published
2022-01-23
How to Cite
VALONZO, Draillim Xaviery F. et al. An Enhancement of Long-Short Term Memory for the Implementation of Virtual Assistant for Pamantasan ng Lungsod ng Maynila Students. International Journal of Computing Sciences Research, [S.l.], v. 6, jan. 2022. ISSN 2546-115X. Available at: <//stepacademic.net/ijcsr/article/view/298>. Date accessed: 01 oct. 2022.
Section
Accepted Version