h1. Pipeline Service

> This article is part of the work-in-progress Pipeline V3. Documentation is written as it is being developed. It does *not* apply to the current build pipeline.

h2. Overview

The build pipeline consists of one service application, which can be run as either master or slave. You must run at least one master, either locally or on an external build server. Slaves may be run in addition: one locally for each extra simultaneous build process you wish to run, or one externally for each simultaneous build process on an external server.

All build services must be able to connect to the same shared network drive for reading the leveldesign and database directories, as well as for writing build output. This may also be a local path if you are running all services locally on one machine.

A [[Pipeline Client]] application may be used to connect to the master pipeline service remotely. The client does not need access to the shared network drive. The service allows the client to browse through build file structures remotely, call commands, list build input and output errors, and so on.
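Putting this together, a typical deployment looks roughly like the following sketch (the number of slaves and the placement of machines are illustrative):

<pre>
 [Pipeline Client] --network--> [master service]    (includes a built-in slave)
                                   |        |
                               [slave]   [slave]    (one per extra simultaneous build)
                                   \        /
                          shared storage: work + leveldesign
</pre>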

h2. Configuration

A storage device that can be shared between multiple physical machines is required; a generic shared folder with write permissions is sufficient. Two main directories need to be identical across machines.

The first is the @work@ directory, which contains the @database@ directory extracted from the Ryzom assets archive, as well as an initially empty @workspace@ directory used by the services to store metadata. By default this is configured as @W:@ on Windows and as @/srv/work@ on Linux. You can use drive mapping or directory linking to save work on configuration.

The second is the @leveldesign@ directory, which is configured as @L:@ on Windows and @/srv/leveldesign@ on Linux. This maps to the @data_leveldesign@ directory residing in a copy of the repository on the shared storage, in order to ensure consistency across all machines. The repository must be kept up to date with the repositories used by the level designers. [? The repository can be configured to be automatically pulled and updated when a build is started ?]
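As an illustration, the shared storage would then be laid out as follows (drive letters and mount points are the defaults named above; only the directories mentioned in this article are shown):

<pre>
W:\  (or /srv/work)
    database\      -- extracted from the Ryzom assets archive
    workspace\     -- initially empty; metadata written by the services
L:\  (or /srv/leveldesign)
    ...            -- maps to data_leveldesign in the repository copy
</pre>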

The necessary configuration files for a basic installation are provided under @pipeline/service/example_cfg@.

Any modifications to the default @*.cfg@ files should be made in the user directory inside this folder; these override the default values. There is a common configuration file, plus one each for Linux and Windows.

Make a copy of the @example_cfg@ folder in a user directory of your choice. Configuration values are allowed to differ between physical machines in order to accommodate the toolsets or capabilities available on each. Keep in mind that you may need to update your default configuration files when new features are released in the pipeline.

Modify the paths to the service executables and to your configuration directory in the @*.bat@ files for Windows and in the @*.sh@ files for Linux.
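As a sketch of what such an edit might look like on Linux, assuming the standard NeL service options @-C@ (configuration directory) and @-L@ (log directory); the binary name and variable names here are hypothetical, and the shipped @*.sh@ files may be laid out differently:

<pre>
#!/bin/sh
# Illustrative paths only; point these at your own build output
# and at the user configuration directory you copied above.
PIPELINE_SERVICE=/home/you/ryzomcore/build/bin/pipeline_service
CONFIG_DIR=/home/you/pipeline_cfg/user
$PIPELINE_SERVICE -C $CONFIG_DIR -L $CONFIG_DIR
</pre>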

You must configure the hostname of the master service in @user/pipeline_service_user.cfg@ by adding the line @MasterAddress = "hostname";@. By default this is @"localhost"@. [? A line @MasterSlavePassword = "yourpasswordhere";@ must also be added in order for the services to acknowledge each other. ?]
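For example, a minimal @user/pipeline_service_user.cfg@ could contain the following; the hostname is a placeholder, and the password line is the tentative requirement noted above:

<pre>
// user/pipeline_service_user.cfg -- per-installation overrides
MasterAddress       = "buildmaster.example.org";  // hostname of the master service
MasterSlavePassword = "yourpasswordhere";         // tentative; see the note above
</pre>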

In case you are not using standard directories for the @work@ and @leveldesign@ directories, modify @user/pipeline_service_user_win.cfg@ or @user/pipeline_service_user_linux.cfg@, adding in @SharedWork = "path";@ and @SharedLeveldesign = "path";@.
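For instance, to point both directories at custom locations on Windows (the paths here are hypothetical):

<pre>
// user/pipeline_service_user_win.cfg -- only needed for non-standard directories
SharedWork        = "D:/pipeline/work/";
SharedLeveldesign = "D:/pipeline/leveldesign/";
</pre>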

Run the master service, which includes a built-in slave, using @win_run_master.bat@ or @linux_run_master.sh@. On each build machine, run @win_run_slave.bat@ or @linux_run_slave.sh@ once for each additional build thread required.
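On Linux, starting a master plus two extra local slaves could look like this (assuming the scripts run in the foreground, they are backgrounded here):

<pre>
./linux_run_master.sh &
./linux_run_slave.sh &
./linux_run_slave.sh &
</pre>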

h2. Workspace

The build workspace is configured using Georges sheets. Refer to the process-specific manuals for more information.

h2. FAQ

h3. How to setup the pipeline locally?

* Set @MasterAddress@ to @"localhost"@ and configure all directories.
* Run @win_run_master.bat@ or @linux_run_master.sh@.
* For each additional simultaneous build process you want, run @win_run_slave.bat@ or @linux_run_slave.sh@.
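For example, on Windows this whole sequence could be launched from a command prompt as follows; @start@ opens each script in its own window, and the @MasterAddress@ default of @"localhost"@ needs no change:

<pre>
rem Windows example: one master (with built-in slave) plus one extra slave
start win_run_master.bat
start win_run_slave.bat
</pre>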