Speech Dispatcher on Linux
This is a short guide to installing and configuring two popular speech synthesizers, RHVoice and Festival, so that they work with the Speech Dispatcher speech server used in many distributions, including Ubuntu and Linux Mint.
Using RHVoice
This is an example of installing the RHVoice speech synthesizer on Linux Mint (it also works for Ubuntu). RHVoice is a multilingual speech synthesizer with support for Russian, English, Ukrainian, Esperanto and other languages. To use the Speech Dispatcher speech server together with the RHVoice voices, you need to build the synthesizer from source, install it on the system and edit the configuration files.
First of all, create a directory into which the synthesizer sources will be downloaded and in which the build will take place. It is assumed that you start in your home directory.
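For example (the directory name build is an arbitrary choice):

    mkdir ~/build
    cd ~/build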
Now install the packages required to build the synthesizer:
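A plausible package set for Ubuntu/Linux Mint, assuming the SCons-based build used by older RHVoice releases (package names may vary between releases):

    sudo apt-get install build-essential scons pkg-config git libspeechd-dev libao-dev libpulse-dev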
Then download the sources from the official repository on GitHub:
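Assuming the current upstream location of the repository:

    git clone https://github.com/RHVoice/RHVoice.git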
The download automatically creates an RHVoice directory containing the downloaded files. Change into this directory and start the build:
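A sketch of the build step, again assuming the SCons-based build:

    cd RHVoice
    scons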
The build will take some time. After it completes successfully, install the built synthesizer:
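Still assuming SCons; ldconfig refreshes the linker cache after the libraries are installed:

    sudo scons install
    sudo ldconfig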
To configure the Speech Dispatcher speech server to work with RHVoice, create (if it does not exist) and edit the configuration file of the corresponding module; you can use a text editor such as nano or gedit:
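For example, assuming the generic-module approach and the file name rhvoice-generic.conf (both are assumptions, not requirements):

    sudo nano /etc/speech-dispatcher/modules/rhvoice-generic.conf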
In the end, the contents of this file should be as follows:
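A minimal sketch of such a generic-module configuration, assuming that RHVoice-test reads text from standard input, that -p selects the voice, and that the Anna and Alan voices are installed; adjust the voice names to whatever your RHVoice build actually provides:

    # Hand the text over to RHVoice-test, which plays it directly.
    GenericExecuteSynth "echo \'$DATA\' | RHVoice-test -p \'$VOICE\'"
    # Only load this module if RHVoice-test is installed.
    GenericCmdDependency "RHVoice-test"
    # Map Speech Dispatcher's symbolic voice names to RHVoice voices (assumed names).
    AddVoice "ru" "FEMALE1" "Anna"
    AddVoice "en" "MALE1" "Alan"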
To exit the nano editor, press Ctrl+X and confirm saving the modified file by pressing Y and then Enter to confirm the file name.
In addition, you can also edit the main Speech Dispatcher configuration file (speechd.conf in /etc/speech-dispatcher/), but in current versions of the speech server this step is not required, because Speech Dispatcher automatically tries to load every synthesizer for which a module is installed on the system.
In this file, find the block of commands that add synthesizer modules. These commands start with the keyword AddModule (a command may be commented out, i.e. have a hash character (#) at the beginning of the line). Before the first AddModule command, add the following line:
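Assuming the generic-module configuration sketched above (the module name and file name are likewise assumptions):

    AddModule "rhvoice" "sd_generic" "rhvoice-generic.conf"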
All that remains is to restart the Speech Dispatcher server so that the changes take effect:
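One simple way, assuming the per-user daemon that is autospawned by client applications (if your system runs it as a service, restart that service instead):

    killall speech-dispatcher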
Using Festival
To use the Speech Dispatcher speech server together with the Festival synthesizers, you need to install a few packages and make changes to the speech server's configuration files.
The package installation looks like this:
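A plausible set for Ubuntu/Linux Mint; festvox-ru provides a Russian voice and festival-freebsoft-utils provides helpers that Speech Dispatcher's festival module relies on (the exact package names may differ between releases):

    sudo apt-get install festival festvox-ru festival-freebsoft-utils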
To configure the speech server, edit the file speechd.conf in the /etc/speech-dispatcher/ directory, i.e. open the file in a text editor:
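For example, with nano:

    sudo nano /etc/speech-dispatcher/speechd.conf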
And uncomment (or add, if it is missing) the line:
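In the stock configuration this is usually present as a commented-out line; it typically looks like this:

    AddModule "festival" "sd_festival" "festival.conf"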
The line above is not mandatory in current versions of Speech Dispatcher, because at startup the speech server automatically tries to load every synthesizer for which a module is installed.
If you want Festival to be the default synthesizer, add the line:
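That is done with the DefaultModule setting:

    DefaultModule festival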
You can also edit the file festival.conf in the /etc/speech-dispatcher/modules/ directory and change or uncomment the following lines (if the default settings suit you, nothing in this file needs to be changed):
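The parameters most commonly adjusted there are the address and port of the Festival server, shown here with their usual defaults (treat the values as an example rather than a requirement):

    FestivalServerHost "localhost"
    FestivalServerPort 1314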
Now start the Festival server by typing the following in a terminal window:
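Festival's server mode listens on port 1314 by default:

    festival --server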
For testing, you can use the command:
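One way to check that Festival itself speaks, assuming the festival binary is on your PATH:

    echo "Hello from Festival" | festival --tts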
Restart the Speech Dispatcher speech server so that the changes take effect:
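As in the RHVoice section, followed by a quick end-to-end test through Speech Dispatcher (spd-say is shipped separately on some distributions, e.g. in speech-dispatcher-utils):

    killall speech-dispatcher
    spd-say -o festival "Hello from Speech Dispatcher"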
Source
Speech-dispatcher
| Original author(s) | Brailcom |
| --- | --- |
| Developer(s) | Samuel Thibault |
| Initial release | 2002 |
| Repository | github.com/brailcom/speechd |
| Operating system | SystemD/Linux |
| Type | System daemon |
| Documentation | Speech Dispatcher manual (freebsoft.org/doc); manpage: speech-dispatcher.1 |
| Website | freebsoft.org/speechd |
Speech-dispatcher is a system daemon that allows programs to use any of the installed speech synthesizers to produce audio from text input, as long as there is a special module or a configuration file for the synthesizer you want to use. It sits as a layer between programs that would like to turn text into speech and the programs that actually do that.
Features And Usability
Speech dispatcher can’t be used for much on its own. It is meant to be called from programs like KMouth when they need text to speech functionality. You will generally not have to interact with it on your own. You may, from time to time, notice that it has magically appeared in the process list. That’s a result of some program asking it to provide text-to-speech functionality.
There is a separate package you can install called speech-dispatcher-utils which contains a tool called spd-say. That tool can be used to make your computer say whatever you type in a terminal. It is useful if you want to record a computer-generated statement or test a new speech-dispatcher configuration, but not for much else.
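For example (-r adjusts the rate, -o picks a specific output module; which module names are available depends on what is installed):

    spd-say "The quick brown fox jumps over the lazy dog"
    spd-say -o espeak-ng -r -30 "A bit slower, using the espeak-ng module"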
Configuration
Speech-dispatcher can be configured using the configuration file /etc/speech-dispatcher/speechd.conf and "module"-specific configuration files in /etc/speech-dispatcher/modules/.
Speech-dispatcher supports several free software text-to-speech engines out of the box, among them eSpeak NG, Festival and Flite.
It does come with additional modules for non-free text to speech software.
The text to speech program it uses is selected by the DefaultModule setting:
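In /etc/speech-dispatcher/speechd.conf, for example:

    DefaultModule espeak-ng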
There is also a "generic" module available. This "generic" module can be used to create custom "modules" (i.e. configuration files) for any text-to-speech software, like mimic, that is not supported by a dedicated speech-dispatcher C module.
Custom "modules"
Custom module configuration files need nothing more than a GenericExecuteSynth variable with an executable and a command line, and a GenericCmdDependency option pointing to the binary.
All you need to make mimic work with speech-dispatcher is:
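A minimal sketch of what /etc/speech-dispatcher/modules/mimic-generic.conf could contain, assuming mimic accepts the flite-style -voice and -t options and that the ap and awb voices are available (the file name and voice names are assumptions):

    # Hand the text over to mimic, which plays it directly.
    GenericExecuteSynth "mimic -voice $VOICE -t \'$DATA\'"
    # Only load this module if the mimic binary is installed.
    GenericCmdDependency "mimic"
    # Map Speech Dispatcher's symbolic voice names to mimic voices.
    AddVoice "en" "MALE1" "ap"
    AddVoice "en" "MALE2" "awb"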
And a line in /etc/speech-dispatcher/speechd.conf that says:
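Matching the file name used above:

    AddModule "mimic" "sd_generic" "mimic-generic.conf"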
You may want to make your custom module slightly more advanced. Generic module configuration files support choosing voices the underlying speech synthesis program supports. Making a module support voices is a matter of adding voices with AddVoice statements and passing a $VOICE variable to the speech engine.
The default voice is set in /etc/speech-dispatcher/speechd.conf using a DefaultVoiceType statement. Having a DefaultVoiceType statement in a module configuration file makes no difference.
Running spd-say -L when those AddVoice statements are present makes it list the voices as available:
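For example (-o restricts the query to the mimic module; the exact output format depends on the spd-say version):

    spd-say -o mimic -L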
The voices spd-say knows about can be used with the -t argument and the voice name in lowercase:
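With the AddVoice mapping sketched above (an assumed mapping), male2 resolves to awb:

    spd-say -o mimic -t male2 "Hello from the awb voice"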
This will pass awb on to mimic using the $VOICE variable.
You will want to use the $LANGUAGE variable if you make a speech-dispatcher module for some back-end with language-specific voices.
Source
Common interface to speech synthesis
This is the Speech Dispatcher project (speech-dispatcher). It is part of the Free(b)soft project, which aims to allow blind and visually impaired people to work with computers and the Internet using free software.
The Speech Dispatcher project provides a high-level, device-independent layer for access to speech synthesis through a simple, stable and well documented interface.
Complete documentation can be found in the doc directory: the Speech Dispatcher documentation in doc/speech-dispatcher.html, the spd-say documentation in doc/spd-say.html, and the SSIP protocol documentation in doc/ssip.html.
Read doc/README for more information.
The key features, as well as the supported TTS engines, output subsystems, client interfaces and client applications known to work with Speech Dispatcher, are listed in the overview of speech-dispatcher, along with voice settings and where to look in case of a sound or speech issue.
There is a public mailing-list speechd-discuss for this project.
This list is for Speech Dispatcher developers as well as users. If you want to contribute to development, propose a new feature, get help or just stay informed about the latest news, don't hesitate to subscribe. Communication on this list is held in English.
Various versions of speech-dispatcher can be downloaded from the project archive.
Bug reports, issues, and patches can be submitted to the GitHub tracker.
The source code is freely available. It is managed using Git. You can use the GitHub web interface or clone the repository from:
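The repository address (also listed in the infobox above):

    git clone https://github.com/brailcom/speechd.git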
A Java library is currently developed separately; it too can be browsed or cloned through the GitHub web interface.
To build and install speech-dispatcher and all of its components, read the file INSTALL.
Speech Dispatcher is being developed in close cooperation between the Brailcom company and external developers; both are equally important parts of the development team. The development team also accepts and processes contributions from other developers, for which we are always very thankful! See more details about our development model in Cooperation. Below is a list of current core development team members and people who have contributed to Speech Dispatcher in the past:
- Samuel Thibault
- Jan Buchal
- Tomas Cerha
- Hynek Hanke
- Milan Zamazal
- Luke Yelavich
- C.M. Brannon
- William Hubbs
- Andrei Kholodnyi
Contributors: Trevor Saunders, Lukas Loehrer, Gary Cramblitt, Olivier Bert, Jacob Schmude, Steve Holmes, Gilles Casse, Rui Batista, Marco Skambraks, and many others.
Copyright (C) 2001-2009 Brailcom, o.p.s
Copyright (C) 2018-2020 Samuel Thibault samuel.thibault@ens-lyon.org
Copyright (C) 2018 Didier Spaier didier@slint.fr
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details (file COPYING in the root directory).
You should have received a copy of the GNU General Public License along with this program. If not, see https://www.gnu.org/licenses/.
The speech-dispatcher server (src/server/ + src/common/) contains GPLv2-or-later and LGPLv2.1-or-later source code, but is linked against libdotconf, which is LGPLv2.1-only at the time of writing.
The speech-dispatcher modules (src/modules/ + src/common/ + src/audio/) contain GPLv2-or-later, LGPLv2.1-or-later, and LGPLv2-or-later source code, but are also linked against libdotconf, which is LGPLv2.1-only at the time of writing.
The spd-conf tool (src/api/python/speechd_config/), spd-say tool (src/clients/say), and spdsend tool (src/clients/spdsend/) are GPLv2-or-later.
The C API library (src/api/c/) is LGPLv2.1-or-later.
The Common Lisp API library (src/api/cl/) is LGPLv2.1-or-later.
The Guile API library (src/api/guile/) contains GPLv2-or-later and LGPLv2.1-or-later source code.
The Python API library (src/api/python/speechd/) is LGPLv2.1-or-later.
All tests in src/tests/ are GPLv2-or-later.
Source
speech-dispatcher
manual page for speech-dispatcher 0.9.1
SYNOPSIS
speech-dispatcher [OPTIONS]
DESCRIPTION
Speech Dispatcher — Common interface for Speech Synthesis (GNU GPL)
OPTIONS
-d, --run-daemon
    Run as a daemon
-s, --run-single
    Run as a single application
-a, --spawn
    Start only if autospawn is not disabled
-l, --log-level
    Set log level (between 1 and 5)
-L, --log-dir
    Set path to logging
-c, --communication-method
    Communication method to use ('unix_socket' or 'inet_socket')
-S, --socket-path
    Socket path to use for the 'unix_socket' method (filesystem path or 'default')
-p, --port
    Specify a port number for the 'inet_socket' method
-t, --timeout
    Set the time in seconds for the server to wait before it shuts down if it has no clients connected
-P, --pid-file
    Set path to pid file
-C, --config-dir
    Set path to configuration
-m, --module-dir
    Set path to modules
-v, --version
    Report version of this program
-D, --debug
    Output debugging information into $TMPDIR/speechd-debug if TMPDIR is exported, otherwise to /tmp/speechd-debug
-h, --help
    Print this info

Please report bugs to speechd-discuss@nongnu.org
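For example, to run the daemon explicitly with a higher log level and a 30-second idle timeout (a usage sketch combining the options above):

    speech-dispatcher -d -l 4 -t 30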
COPYRIGHT
Copyright © 2002-2012 Brailcom, o.p.s. This is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2, or (at your option) any later version. Please see COPYING for more details.
SEE ALSO
The full documentation for speech-dispatcher is maintained as a Texinfo manual. If the info and speech-dispatcher programs are properly installed at your site, the command info speech-dispatcher should give you access to the complete manual.
Source