Please use this identifier to cite or link to this item:
https://open.uns.ac.rs/handle/123456789/756
Title: Speech technology progress based on new machine learning paradigm
Authors: Delić, Vlado; Perić, Z.; Sečujski, Milan; Jakovljević, Nikša; Nikolić, Jelena; Mišković, Dragiša; Simić, Nikola; Suzić, Siniša; Delić, Tijana
Issue Date: 1-Jan-2019
Journal: Computational Intelligence and Neuroscience
Abstract: © 2019 Vlado Delić et al. Speech technologies have been developed for decades as a typical signal processing area, while the last decade has brought huge progress based on new machine learning paradigms. Owing not only to their intrinsic complexity but also to their relation to the cognitive sciences, speech technologies are now viewed as a prime example of an interdisciplinary knowledge area. This review of speech signal analysis and processing, the corresponding machine learning algorithms, and applied computational intelligence aims to give insight into several fields: speech production and auditory perception, cognitive aspects of speech communication and language understanding, speech recognition and text-to-speech synthesis in more detail, and, consequently, the main directions in the development of spoken dialogue systems. Additionally, the article discusses the concepts and recent advances in speech signal compression, coding, and transmission, including cognitive speech coding. In conclusion, the main intention of this article is to highlight the recent achievements and challenges based on new machine learning paradigms that, over the last decade, have had an immense impact on the field of speech signal processing.
URI: https://open.uns.ac.rs/handle/123456789/756
ISSN: 1687-5265
DOI: 10.1155/2019/4368036
Appears in Collections: FTN Publikacije/Publications
Scopus™ citations: 46 (checked on May 10, 2024)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.