Please use this identifier to cite or link to this item: http://studentrepo.iium.edu.my/handle/123456789/11168
Title: Modeling for recognizing recitation type of the Holy Qur'an
Authors: Yousfi, Bilal
Supervisor: Akram M Z M Khedher, Ph.D
Subject: Automatic speech recognition
Subject (ICSI): Qur'ān -- Qira'āt -- Dialects; Qur'ān -- Tilawah
Year: 2021
Publisher: Kuala Lumpur : Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, 2021
Abstract in English: Computer science and speech recognition have enjoyed a long and fruitful relationship for decades. Speech recognition has been beneficial for capturing and producing accurate transcriptions of spoken words. In computer science, a prime challenge is to interpret these signals into meaningful data and to develop algorithms and applications that establish an interface between the human voice signal and the computer. Moreover, the major concerns of automatic speech recognition (ASR) are determining a set of classification features and finding a suitable recognition model for those features. Hidden Markov Models (HMMs) have been demonstrated to be powerful models for representing time-varying signals. The act of reading the Qur'ān and pronouncing its sounds depends on the type of recitation, such as the recitation of Warsh or the recitation of Hafss. According to the science of Qira'āt, it is essential to recognize the type of recitation, especially given the diversity and spread of Qira'āt in the world. Numerous efforts have been made in previous systems to develop feasible techniques for guiding the act of reading the Holy Qur'ān (Tajweed rules). Unfortunately, those approaches neglected to link the major control variables of the practices of both Usūl al Qira'āh (general principles) and Farsh al-huruf (specific variants). To fill this gap, this thesis attempts to design and build a speech recognition system that distinguishes between the types of recitation (the Qira'āt of Hafss An Assim and the Qira'āt of Warsh An Naafi') while the Qur'ān is being recited. The proposed system is capable of recognizing, identifying, pointing out mismatches, and discriminating between the two types of recitation, Hafss and Warsh. An experiment was conducted comparing users' recitations in Hafss and Warsh with recitations by an expert Qur'ān reader stored in a database.
This thesis investigates acoustic models based on a Hidden Markov Model (HMM) classifier combined with a clustering algorithm for Qur'ān speech recognition. A significant improvement in recognition performance was achieved when the HMM-clustering model was implemented, compared with the result of the baseline model (a single HMM model with conventional MFCC features). The results show that the proposed model recognizes phoneme sequences faster than the conventional MFCC model. The adoption of the k-means algorithm is seen to be a more valid approach to acoustic modeling for speech recognition. However, the developed system shows lower performance in some instances when compared with other systems recently reported in the literature that used the same data. This is due to the small size of the training dataset used in this research, limited hardware availability, and noise from the environment and from the speakers, all of which can affect the results; the aim of this thesis is to investigate the proposed models for speech recognition and to make a direct comparison between them.
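A minimal sketch of the kind of pipeline the abstract describes, written here as an illustration only (the thesis's actual feature extraction, codebook size, and HMM configuration are not specified in this record): k-means clusters MFCC-like feature frames into a discrete codebook, and the resulting label sequence would then drive a discrete-observation HMM per Qira'āh, with recognition by comparing model likelihoods. The synthetic data, dimensions, and cluster count below are assumptions.

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Plain k-means on feature frames: returns (codebook, labels)."""
    rng = np.random.default_rng(seed)
    # initialize codewords from randomly chosen frames
    codebook = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each frame to its nearest codeword (Euclidean distance)
        dists = np.linalg.norm(
            features[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each codeword to the mean of its assigned frames
        for j in range(k):
            if (labels == j).any():
                codebook[j] = features[labels == j].mean(axis=0)
    return codebook, labels

# Synthetic stand-in for 13-dimensional MFCC frames drawn from two
# acoustically distinct recitations (purely illustrative data).
rng = np.random.default_rng(1)
frames = np.vstack([rng.normal(0.0, 1.0, (100, 13)),
                    rng.normal(5.0, 1.0, (100, 13))])

codebook, labels = kmeans(frames, k=2)
# `labels` is now a discrete symbol sequence; in an HMM-clustering
# system it would be scored against one trained HMM per recitation
# type (e.g. Hafss vs. Warsh), picking the higher-likelihood model.
```

In a real system the frames would come from an MFCC front end over recorded recitations, and the codebook size would be far larger than two; the point of the sketch is only the clustering step that converts continuous features into HMM observation symbols.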
Call Number: t TK 7895 S65 Y82M 2021
Kulliyyah: Kulliyyah of Information and Communication Technology
Programme: Doctor of Philosophy in Computer Science
URI: http://studentrepo.iium.edu.my/handle/123456789/11168
Appears in Collections:KICT Thesis

Files in This Item:
File: t11100327835BilalYousfi_24.pdf (24 pages file, 584.07 kB, Adobe PDF)
File: t11100327835BilalYousfi_SEC.pdf (Full text secured file, 2.59 MB, Adobe PDF; restricted access, available on request)



Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated. Please give due acknowledgement and credits to the original authors and IIUM where applicable. No items shall be used for commercialization purposes except with written consent from the author.