Localization of Spoken and Written Text

Overview

In CryENGINE2, localization covers both text and sound. All data necessary for localization is stored in a pak file in the Game/Localized folder, named after the language it contains (for example, Game/Localized/english.pak). The structure of these pak files is the same for every language. Inside each pak, all files in the Languages folder can be translated directly, except dialog_recording_list.xml and ai_dialog_recording_list.xml. These two files are used by the dialog system and therefore need further explanation.
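
For illustration, the contents of english.pak might be laid out as follows. The two recording lists and the dialog folder are taken from this page; the remaining file name is a made-up example of a directly translatable file:

    english.pak
      Languages/
        dialog_recording_list.xml       (used by the dialog system)
        ai_dialog_recording_list.xml    (used by the dialog system)
        text_ui_menus.xml               (example name; directly translatable)
        dialog/                         (recorded audio and .fsq files)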

The g_language console variable can be used to load a specific language, addressed by the name of the language pak file (for example, english for english.pak). The dialog recording lists, as well as the translated text, are loaded and handled by the LocalizationManager class. Make sure that new files are correctly loaded in the CSystem::OpenLanguagePak() function.
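
As a minimal sketch of how a language name maps to its pak path, consider the helper below. It is invented for illustration and is not the engine API; the actual path handling and loading happen inside CSystem::OpenLanguagePak().

    #include <string>

    // Illustrative only: builds the pak path for a language name, e.g.
    // "english" (the value of g_language) -> "Game/Localized/english.pak".
    // The real loading logic lives in CSystem::OpenLanguagePak().
    std::string LanguagePakPath(const std::string& language)
    {
        return "Game/Localized/" + language + ".pak";
    }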

Translating Spoken Text

As the XML files mentioned above contain all the text spoken in a game, they can become huge and are best viewed in spreadsheet software such as Excel. Inside the files, each line represents a sentence spoken by a character. A detailed description of the individual parameters of these lines can be found in The Dialog System. As an additional step, sentences spoken by different characters can be combined into a dialog in the Dialog Editor, for further use in TrackView scenes or Flow Graphs.

The AUDIO_FILENAME column, also called the sound-key, is a unique identifier for a spoken sentence. It directly references the audio file in the dialog folder, omitting the Languages folder and the file extension. Because it is a valid sound name, this sound-key can be copied into the Dialog Editor or any dialog sound field. For each line specified in the XML table, the engine automatically loads two files: one optional and one mandatory.
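
The path-to-sound-key relationship can be sketched as follows; the helper and the example path are hypothetical, not engine code. Stripping the Languages folder and the extension from, say, Languages/dialog/village/greeting01.wav yields the sound-key dialog/village/greeting01.

    #include <string>

    // Hypothetical helper: derives the sound-key from an audio file path
    // by stripping the leading "Languages/" folder and the extension, e.g.
    // "Languages/dialog/village/greeting01.wav" -> "dialog/village/greeting01".
    std::string SoundKeyFromPath(std::string path)
    {
        const std::string prefix = "Languages/";
        if (path.compare(0, prefix.size(), prefix) == 0)
            path.erase(0, prefix.size());
        const std::string::size_type dot = path.find_last_of('.');
        if (dot != std::string::npos)
            path.erase(dot);
        return path;
    }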

  • .FSQ - This file contains the facial animation data that is applied to the speaking character. It must be generated in the Facial Editor from the corresponding audio source. To speed up the translation process, these files may be reused unchanged as the basis for lip synchronization. Dialog lines that are spoken offscreen do not need an FSQ file.
  • .WAV, .MP2, or .MP3 - This file contains the audio source. All dialog audio must consistently use a single format, because the engine sets the decoding mechanism globally (via the s_MPEGCompression console variable). For example, even if a sound internally references the WAV extension, the MP2 file will be loaded when s_MPEGCompression is set to 2 (see the sketch after this list). Ideally, all dialog references should point to WAV files; the compression of choice can then be selected and changed easily.
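
The global decoding switch can be pictured with the following sketch. Only the mapping of the value 2 to MP2 is stated on this page; the other mappings are assumptions made for illustration, and the helper itself is not part of the engine.

    #include <string>

    // Picks the extension of the audio file the engine would load for a
    // dialog line, based on the global s_MPEGCompression setting.
    // 2 -> ".mp2" is documented above; the other cases are assumptions.
    std::string DialogAudioExtension(int sMPEGCompression)
    {
        switch (sMPEGCompression)
        {
        case 2:  return ".mp2"; // documented on this page
        case 3:  return ".mp3"; // assumption for this sketch
        default: return ".wav"; // assumption: uncompressed source audio
        }
    }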

In the simplest case, localizing the audio data is just a matter of providing translated versions of the audio files in the structure described above.
