Procedure Infrastructure Deployment
Different deployment procedures apply for gCube, gLite/UMD, Hadoop and Runtime Resources.
The gCube nodes of the D4Science Ecosystem can be deployed on 32- and 64-bit machines and support several Linux distributions. They have been tested on CERN Scientific Linux, RedHat Enterprise Linux, Ubuntu, and Fedora.
A gCube node of the D4Science Ecosystem is composed of two main constituents:
- a base gHN distribution or SmartGears distribution, managed locally by Site Managers;
- gCube services running on the gHN or on the SmartGears container, managed remotely by VO Admins and VRE Managers.
- gHN - The gHN distribution is available from the gCube website. The Administrator Guide provides detailed information about the gHN installation process.
- SmartGears - The SmartGears distribution is available from the gCube website. The SmartGears installation guide provides information about the SmartGears installation process.
- gCube Service - gCube services are installed when new VOs/VREs are deployed. Check the VO Creation and VRE Creation procedures.
- gHN and SmartGears - The upgrade of gHNs and SmartGears nodes is based on upgrade plans published in the Resources Upgrade page. Upgrades are announced via the WP5 mailing list.
- gCube - The upgrade of gCube services is based on upgrade plans published in the Resources Upgrade page. Upgrades are announced via the WP5 mailing list.
To coordinate installation and upgrade activities, the Infrastructure Managers use the iMarine TRAC. For each activity, the Infrastructure Managers should open a TRAC ticket of type infrastructure describing the activity to perform and assign it to a Site Manager with a Due Date. When closing the ticket, the Site Manager responsible for the task is expected to fill in the Intervention Time field with the time spent performing the task. Tickets associated with installations and upgrades are also reported in the Resources Upgrade page. More information is available on the Infrastructure upgrade wiki.
The UMD middleware is composed of several components providing different grid services for distributed computing and storage. The latest stable release is UMD 1.5. This release is certified to run on CERN Scientific Linux 5. All UMD components run on the x86_64 architecture.
UMD components are in general expected to run on dedicated machines. However, it may be possible for some UMD components to co-exist with other UMD/gCube nodes. For example, the UMD Worker Node can be installed on the same machine as a gCube node.
The UMD nodes of the D4Science Ecosystem run the following gLite components: Cream_CE, WN, DPM_SE, WMS, LB, VOMS, UI and Apel.
The default installation method for SLC5 packages is the YUM tool. Each gLite component has an associated YUM meta-package.
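As a sketch, installing a gLite component on SLC5 then reduces to pulling its YUM meta-package. The meta-package name below (`emi-wn`, for the Worker Node) is an assumption; consult the documentation of the UMD release in use for the exact name of each component's meta-package.

```shell
# Hypothetical meta-package name for the Worker Node component;
# the actual name depends on the UMD release being deployed.
METAPKG="emi-wn"

# Refresh the YUM cache and install the meta-package with its dependencies.
# Guarded so the sketch is a no-op on machines without YUM.
if command -v yum >/dev/null 2>&1; then
    yum clean all
    yum install -y "$METAPKG"
fi
```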
The configuration of UMD nodes is performed by a set of shell scripts provided by the YAIM framework. The provided configuration scripts can be used by Site Managers without in-depth knowledge of middleware-specific configuration details. Site Managers must adapt some configuration files according to the provided examples; the resulting configuration is a default site configuration.
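A typical YAIM invocation looks like the sketch below. The paths and the node type are assumptions: the site-info file must be the one adapted from the provided examples, and the node type must match the component installed on the machine.

```shell
# Assumed locations; adapt to the local installation.
YAIM=/opt/glite/yaim/bin/yaim
SITE_INFO=/opt/glite/yaim/etc/site-info.def   # site configuration adapted from the examples
NODE_TYPE=WN                                  # hypothetical node type for this machine

# -c runs the configuration functions, -s selects the site-info file,
# -n names the node type to configure. Guarded for machines without YAIM.
if [ -x "$YAIM" ]; then
    "$YAIM" -c -s "$SITE_INFO" -n "$NODE_TYPE"
fi
```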
Upgrades in UMD are done on a per-component basis. Each upgrade is associated with a web page containing the details of the upgrade and the list of affected components. All updates are announced via the EGI CIC Portal broadcast tool on a regular basis, and sites are asked to keep their installations up to date with respect to the latest update released.
Detailed instructions about the upgrade of gLite can be found in the UMD Installation Guide.
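Assuming the component was installed from a YUM meta-package, applying a released update is then a sketch along these lines; the meta-package name is hypothetical, and the per-update web page should always be checked for component-specific steps.

```shell
# Hypothetical meta-package of the component being upgraded.
METAPKG="emi-wn"

# Pull the updated packages for this component only.
# Guarded so the sketch is a no-op on machines without YUM.
if command -v yum >/dev/null 2>&1; then
    yum clean all
    yum update -y "$METAPKG"
fi
# After an upgrade, re-running the YAIM configuration for the node type
# may be required; the upgrade's web page lists any such steps.
```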
Hadoop and Runtime Resources
Due to the diverse nature of the services and installation types, Hadoop and Runtime Resources installations and upgrades do not follow a predefined procedure. However, as for the gCube Resources, each action is associated with a TRAC ticket of type infrastructure in which Site Managers have to report the Intervention Time spent.