
Abstract

This article focuses on the development of deep learning models and algorithms designed specifically for Uzbek language processing in the IT field. A comprehensive approach involving data collection, preprocessing, model selection, and evaluation was employed. Experiments were conducted with RNNs, LSTMs, and transformer-based models such as BERT and GPT, with the transformer models yielding superior results. Key challenges included limited datasets and the complex morphological structure of Uzbek. The findings suggest that fine-tuned transformer models, especially when combined with language-specific preprocessing, can significantly improve performance on language-understanding tasks for low-resource languages.
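Because Uzbek is agglutinative (a stem such as *kitob*, "book", takes suffix chains like *-lar*, plural, and *-im*, "my"), the language-specific preprocessing the abstract mentions is commonly realized through subword segmentation. The sketch below is a minimal, self-contained byte-pair encoding (BPE) learner of the kind used for such segmentation; it is illustrative only, not the authors' actual pipeline, and the toy corpus and merge count are assumptions for demonstration.

```python
from collections import Counter

def get_pair_counts(words):
    """Count adjacent symbol pairs across the corpus, weighted by word frequency."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(words, pair):
    """Replace every occurrence of the given adjacent pair with one merged symbol."""
    a, b = pair
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and symbols[i] == a and symbols[i + 1] == b:
                out.append(a + b)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        key = tuple(out)
        merged[key] = merged.get(key, 0) + freq
    return merged

def learn_bpe(corpus, num_merges):
    """Learn up to num_merges BPE merge operations from a list of word tokens."""
    # Split each word into characters and append an end-of-word marker.
    words = Counter(tuple(w) + ("</w>",) for w in corpus)
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_counts(words)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        words = merge_pair(words, best)
        merges.append(best)
    return merges, words

# Toy Uzbek-flavored corpus: shared stems and suffixes surface as merges.
merges, vocab = learn_bpe(["kitoblar", "kitoblar", "kitobim", "maktablar"], num_merges=10)
print(merges[:3])
```

On a realistic corpus, frequent stems and suffixes (e.g. *-lar*) tend to emerge as single subword units, which is why subword vocabularies suit morphologically rich, low-resource languages better than word-level ones.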

Keywords

Deep learning; natural language processing; Uzbek language


How to Cite
Suyunova Zamira, & Erkinova Dilnoza. (2025). Development Of Deep Learning Models And Algorithms For Language Processing In Uzbek. Texas Journal of Engineering and Technology, 40, 1–4. Retrieved from https://zienjournals.com/index.php/tjet/article/view/5873
