Compute Node Tray

CS500 model 3211 compute module (tray).

Each 1U compute module (tray) operates as a single compute node within the CS500 3211 chassis and plugs into a backplane.

When adding or removing components from the compute node, make sure cables are routed correctly before plugging the node back into the chassis. Use caution to ensure that cables and wires are not pinched and do not block airflow.
Figure: 3211 Compute Node Tray

Power Docking Board

The power docking board provides hot swap docking of 12V main power between the compute node motherboard and the power supplies. The power docking board in each compute node plugs into the backplane, and the bridge board in each compute node plugs into the HDD backplane.

Depending on the compute node model, one of the following power docking boards is used to enable hot swap support of the compute node into or out of the 3211 chassis:
  • Standard power docking board
  • SAS/NVMe combination power docking board
The power docking board implements the following features:
  • Main 12V hot swap connectivity between the compute node tray and the chassis power distribution boards.
  • Current sensing of 12V main power for use with the node manager (see the example after the figure below).
  • Three 8-pin dual rotor fan connectors.
  • Four-screw mounting to the compute node tray.
Figure: 3211 Power Docking Boards
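
The 12V current-sense telemetry noted above is used by the node manager for power monitoring. As an illustration only, and not taken from this product's documentation, the following Python sketch polls a node's power reading over standard IPMI DCMI using ipmitool; the BMC address, credentials, and output parsing are assumptions that must be adapted to the actual environment.

    import re
    import subprocess

    # Illustrative only: poll a node's power reading over IPMI DCMI.
    # The host address and credentials below are hypothetical placeholders.
    BMC_HOST = "10.0.0.10"
    BMC_USER = "admin"
    BMC_PASS = "password"

    def read_power_watts() -> float:
        """Return the instantaneous power reading reported by the BMC, in watts."""
        out = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", BMC_HOST, "-U", BMC_USER,
             "-P", BMC_PASS, "dcmi", "power", "reading"],
            capture_output=True, text=True, check=True,
        ).stdout
        match = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", out)
        if match is None:
            raise RuntimeError("Unexpected ipmitool output")
        return float(match.group(1))

    if __name__ == "__main__":
        print(f"Node power: {read_power_watts():.0f} W")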

Bridge Board

The bridge board extends motherboard I/O by delivering SATA/SAS/NVMe signals, disk backplane management signals, BMC SMBus signals, control panel signals, and various compute node-specific signals. The bridge board provides hot swap interconnect of all electrical signals (except main 12V power) to the chassis backplane. One bridge board is used on each compute node. The bridge board is secured to the compute node tray with six screws through the side of the tray. A black plastic mounting plate at the end of the bridge board protects and separates the bridge board from the side of the tray.

There are different bridge board options to support the different drive options in the front of the server. A dual processor system configuration is required to support a bridge board with 12Gb/s SAS; the 12Gb/s SAS bridge boards are not functional in a single processor system configuration.

Figure: 3211 Bridge Boards

System Fans

The three dual rotor 40 x 40 x 56 mm system managed fans provide front-to-back airflow through the compute node. Each fan is mounted within a metal housing on the compute node base. System fans are not held in place by any type of fastener; instead, each fan is held tightly by friction, using a set of four blue-sleeved rubber grommets that sit within cutouts in the chassis fan bracket.

Each system fan is cabled to a separate 8-pin connector on the power docking board. Fan control signals for each system fan are then routed to the motherboard through a single 2x7 connector on the power docking board, which is cabled to a matching fan controller header on the motherboard.

Each fan within the compute node can support variable speeds. Fan speed may change automatically when any temperature sensor reading changes. Each fan connector within the node supplies a tachometer signal that allows the baseboard management controller (BMC) to monitor the status of each fan. The fan speed control algorithm is programmed into the motherboard’s integrated BMC.
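
The actual control algorithm resides in BMC firmware and is not user-modifiable. The Python sketch below only illustrates the general closed-loop idea described above: a temperature sensor reading is mapped to a fan PWM duty cycle, and all fans are driven from the hottest sensor. The thresholds, duty-cycle limits, and sensor values are examples, not the values programmed into this motherboard's BMC.

    # Illustrative sketch of the kind of closed-loop fan control a BMC performs.
    # The thresholds and duty-cycle limits below are examples only.

    MIN_TEMP_C = 30.0    # at or below this, run fans at the floor speed
    MAX_TEMP_C = 75.0    # at or above this, run fans at 100%
    MIN_DUTY_PCT = 30.0  # fan floor duty cycle
    MAX_DUTY_PCT = 100.0

    def fan_duty_for_temp(temp_c: float) -> float:
        """Map a temperature sensor reading to a PWM duty cycle (percent)."""
        if temp_c <= MIN_TEMP_C:
            return MIN_DUTY_PCT
        if temp_c >= MAX_TEMP_C:
            return MAX_DUTY_PCT
        span = (temp_c - MIN_TEMP_C) / (MAX_TEMP_C - MIN_TEMP_C)
        return MIN_DUTY_PCT + span * (MAX_DUTY_PCT - MIN_DUTY_PCT)

    def duty_for_node(sensor_readings_c: list[float]) -> float:
        """Drive all fans from the hottest sensor reading."""
        return max(fan_duty_for_temp(t) for t in sensor_readings_c)

    # Example: CPU, memory, and inlet sensor readings in degrees Celsius.
    print(duty_for_node([62.0, 48.5, 27.0]))  # duty derived from the 62 C reading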

Compute nodes do not support fan redundancy. Should a single rotor stop working, the following events will most likely occur:
  • The integrated BMC detects the fan failure.
  • The event is logged to the system event log (SEL).
  • The System Status LED on the server board and chassis front panel changes to flashing green, indicating that the system is operating in a degraded state and may fail at some point.
  • The remaining functional system fans operate at 100% in an effort to keep the compute node at or below the pre-programmed maximum thermal limits monitored by the BMC.

Fans are not hot swappable. If a fan fails, it should be replaced as soon as possible.
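
Because a failed fan is reported through the SEL and the fan tachometer sensors, the failure can be confirmed from the operating system before scheduling a replacement. The sketch below assumes ipmitool is installed and that the BMC exposes standard fan sensor data records; sensor names and event wording vary by platform.

    import subprocess

    # Illustrative only: confirm a suspected fan failure from the node's OS.
    # Assumes ipmitool is installed and a local BMC interface is available.

    def run(args: list[str]) -> str:
        return subprocess.run(args, capture_output=True, text=True, check=True).stdout

    if __name__ == "__main__":
        # Tachometer readings for every fan sensor the BMC reports.
        print(run(["ipmitool", "sdr", "type", "Fan"]))
        # Recent system event log entries; look for fan-related events.
        print(run(["ipmitool", "sel", "list"]))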

Air Duct

Each compute node requires the use of a transparent plastic air duct to direct airflow over critical areas within the node. To maintain the necessary airflow, the air duct must be properly installed and seated before sliding the compute node into the chassis.
Figure: 3211 Compute Module Air Duct
In system configurations where CPU 1 is configured with an integrated Intel® Omni-Path HFI, an additional plastic air baffle is attached to the bottom side of the air duct. The air baffle must be attached to the air duct to ensure proper airflow to the chipset and, when installed, the Intel Fabric Through (IFT) carrier board.
Figure: Air Baffle Addition