As we move to wide-scale deployment, we believe the UCS X-Series architectural message is resonating extremely well, as deployment feedback has been incredible.
In the first post of this blog series, we discussed how heterogeneous computing is causing a paradigm shift in computing and shaping the UCS X-Series architecture. In this blog we'll discuss the electromechanical aspects of the UCS X-Series architecture.
The form factor remains constant over the life cycle of the product, hence the electromechanical aspects that shape the enclosure design are critical in the design phase.
Electro-mechanical Anchors
Some of the anchors are:
Socket density per "RU"
Memory density
Backplane-less IO
Mezzanine options
Volumetric space for logic
Power footprint & delivery
Airflow (measured as CFM) for cooling
Socket and memory density is critical when comparing different vendors' products, and in general is an indication of how well the platform mechanicals have been designed within a given "RU" envelope. The ratio of volumetric space required for mechanical integrity vs. logic is another important criterion. These criteria helped us zero in on a 7RU chassis height while offering more volumetric space for logic compared to equivalent "RU" designs in the industry.
Earlier generations of compute platforms relied on a backplane for connectivity. UCS X-Series does not use a backplane but direct-connect IO interconnects. As IO technology advances, nodes and the IO interconnect can be upgraded seamlessly, since the elements that need to change are modular rather than fixed on a backplane. As IO interconnect speed increases, its reach decreases, making it harder and harder to scale in the electrical domain. UCS X-Series has been designed with a hybrid connector approach that supports electrical IO by default and is ready for optical IO in the future. This optical IO option is optimized for intra-chassis connectivity. Direct-connect IO without a backplane reduces airflow resistance and helps move heat from inlet to outlet efficiently.
Power Distribution
Rack power density per "RU" is hitting 1KW and will soon go beyond that. The majority of current server designs use the well-established 12V distribution to simplify down-conversion to CPU voltages. However, as current density increases, 12V distribution adds to connector costs, PCB layer count, and routing challenges. UCS X-Series, seeing the needs of the next generation of server power requirements, chose a higher distribution voltage of 54V instead of 12V. The higher voltage reduces current density by 4.5 times and ohmic losses by 20 times compared to 12V. Moving from a 12V to a 54V DC output simplifies the main PSU design and makes onboard power distribution more efficient.
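A quick sketch shows where the 4.5x and 20x figures come from. This is illustrative arithmetic, not Cisco design data: the server power and path resistance values below are assumptions chosen only to show the ratios, which are independent of them.

```python
# Why 54V distribution beats 12V: for the same delivered power P,
# current I = P / V, and ohmic (I^2 * R) losses in the distribution
# path scale with the square of the current.

def current(power_w: float, voltage_v: float) -> float:
    """Distribution current in amps for a given power and bus voltage."""
    return power_w / voltage_v

def ohmic_loss(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """I^2 * R loss in the distribution path."""
    i = current(power_w, voltage_v)
    return i * i * resistance_ohm

POWER_W = 2200.0   # assumed 2-socket server load for illustration
R_PATH = 0.001     # assumed 1 milliohm distribution path resistance

i12, i54 = current(POWER_W, 12.0), current(POWER_W, 54.0)
loss12 = ohmic_loss(POWER_W, 12.0, R_PATH)
loss54 = ohmic_loss(POWER_W, 54.0, R_PATH)

print(f"Current ratio 12V/54V: {i12 / i54:.2f}x")    # 54/12 = 4.50x
print(f"Loss ratio 12V/54V:    {loss12 / loss54:.2f}x")  # 4.5^2 ≈ 20.25x
```

The ratios fall out of the voltages alone: 54/12 = 4.5 for current, and 4.5 squared ≈ 20 for ohmic loss, matching the figures quoted above.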
Server Power Consumption
We are seeing CPU TDP (Thermal Design Power) increasing by 75-100W at a 2-year cadence. Compute nodes will soon see 350W per socket, and they must be ready for 500W+ by 2024. A 2-socket server with GPU and storage requires close to 2.2KW of power, not accounting for any distribution losses. To cool this 2-socket server, the fan modules alone will consume around 240W, 11% of the total power. Factoring in distribution efficiencies at each intermediate stage of conversion from the AC input, we are at around 2.4KW of power draw. So, in a rack with 20 x 2RU servers, fan power alone will consume 4800W! A modular blade platform like UCS X-Series, with its centralized cooling and bigger fans, offers much higher CFM at lower power consumption. Still, fan power consumption is becoming a substantial portion of the total power budget.
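The arithmetic above is easy to verify. A minimal back-of-the-envelope check, using the example figures from this section (2.2KW server, 240W of fans, 20 servers per rack):

```python
# Back-of-the-envelope check of the fan power figures above.
# All values are the illustrative examples from the text, not measurements.

SERVER_POWER_W = 2200    # 2-socket server with GPU and storage
FAN_POWER_W = 240        # fan modules in that server
SERVERS_PER_RACK = 20    # rack of 20 x 2RU servers

fan_share = FAN_POWER_W / SERVER_POWER_W          # ~0.11, i.e. ~11% of server power
rack_fan_power = FAN_POWER_W * SERVERS_PER_RACK   # total fan power per rack

print(f"Fan share of server power: {fan_share:.0%}")
print(f"Rack fan power: {rack_fan_power}W")
```

Fans at 240W of a 2.2KW server are roughly 11% of the load, and 20 such servers burn 4800W per rack on fans alone.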
Cooling
Advances in semiconductors and magnetics allow us to deliver more power over the lifetime of the chassis. However, it is difficult to pull off a dramatic increase in airflow (measured as fan CFM), as technological advances there are slow and incremental. Furthermore, cost economics dictates the use of passive heat transfer techniques to cool the CPU in a server. This makes defining fan CFM requirements for cooling the compute nodes a multi-variable problem.
Unlike standard rack designs, which use a spread-core CPU placement, UCS X-Series uses a shadow-core design principle, complicating cooling even further.
Banks of U.2/E3 storage drives, with power upwards of 25W each, and accelerators on the front side of the blade will restrict air reaching the CPUs as well as pre-heat that air.
The UCS X-Series design approached these challenges holistically. First and foremost is the development of a state-of-the-art fan module delivering class-leading CFM. The other is dynamic power management coupled with a fan control algorithm that will adapt and evolve as cooling demand grows and ebbs. Compute nodes are designed with high and low CFM paths, channeling appropriate airflow for cooling. Furthermore, power management options provide customers with configurable knobs to optimize for fan power or for a high-performance mode.
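To make the fan control idea concrete, here is a minimal, hypothetical sketch of a temperature-driven fan loop. The actual UCS X-Series algorithm is proprietary and far more sophisticated; the setpoint, gain, and duty-cycle range below are all invented for illustration.

```python
# Illustrative proportional fan control: map inlet temperature to a
# PWM duty cycle. All constants are assumptions, not UCS values.

def fan_duty(inlet_temp_c: float,
             target_c: float = 35.0,   # assumed temperature setpoint
             min_duty: float = 0.20,   # assumed idle fan floor
             gain: float = 0.04) -> float:
    """Return a fan PWM duty cycle in [min_duty, 1.0]."""
    error = inlet_temp_c - target_c
    duty = min_duty + gain * max(error, 0.0)   # ramp up only above target
    return min(duty, 1.0)                      # clamp at full speed

for t in (30, 40, 55):
    print(f"{t}C -> duty {fan_duty(t):.2f}")
```

A real controller would also weigh per-node power telemetry and the configured policy (fan-power-optimized vs. high-performance) when choosing the duty cycle, which is where the "configurable knobs" mentioned above come in.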
Emergence of Alternate Cooling Technologies
Spot cooling of a CPU/GPU at 350W is approaching the limits of air cooling. Doubling airflow yields about 30% more cooling, but it can add 6-8 times more fan power, with diminishing returns.
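The 6-8x figure follows from the fan affinity laws: airflow scales roughly linearly with fan speed, while fan power scales with the cube of speed. A one-line sketch of the ideal cube-law relationship:

```python
# Fan affinity laws: airflow ~ speed, power ~ speed^3. So the ideal
# fan power multiplier for a given airflow multiplier is the cube.

def fan_power_ratio(airflow_ratio: float) -> float:
    """Ideal fan power multiplier for a given airflow multiplier."""
    return airflow_ratio ** 3

print(fan_power_ratio(2.0))   # doubling airflow costs ~8x fan power
```

The ideal law gives 2^3 = 8x; the 6-8x range quoted above reflects real-world fan and impedance curves falling somewhat short of the ideal.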
Data centers are not yet ready for liquid cooling on a wholesale basis. Immersion cooling requires a complete overhaul of the rack. Hyperscalers will lead the early adoption cycle, and eventually enterprise customers will get there, but the tipping point from air to liquid cooling is still unknown. Air cooling is not going away, as we still need to cool memory, storage, and other components that are operationally difficult to liquid-cool. We need to collect more data and answer the following key questions before liquid cooling becomes attractive:
Do we really need liquid cooling for all racks, or only the few racks that host high-TDP servers?
Is liquid cooling more for green-field deployments, as a way to reduce fan power/acoustics, than for high-TDP CPU/GPU enablement?
Are there any compliance mandates that target energy reduction by certain dates in the data center?
What does a TCO analysis of fan power savings vs. the total cost of a liquid cooling deployment show?
Is the customer OK spending more on fan power for cooling rather than retrofitting the infrastructure for liquid cooling?
Is liquid cooling going to help deploy more high-TDP servers without upgrading power to the rack?
For example: saving 100W per 1U in fan power translates to 3.6KW (36 x 1U servers) of additional available power.
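The last example is simple to check. A tiny sketch of the reclaimed power budget, using the figures from the example above:

```python
# Reclaimed rack power budget if liquid cooling frees 100W of fan
# power per 1U server, across a full 36 x 1U rack (example figures).

SAVING_PER_1U_W = 100
SERVERS_1U = 36

reclaimed_w = SAVING_PER_1U_W * SERVERS_1U
print(f"Reclaimed power: {reclaimed_w}W")   # 3600W = 3.6KW
```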
UCS X-Series, however, does support a hybrid model: a mix of air and liquid cooling when air cooling alone is not sufficient. Watch for more details on liquid cooling in UCS X-Series in upcoming blogs.
In the next blog, we'll elaborate on the trends that drove the UCS X-Series internal architecture.
Resources
UCS X-Series – The Future of Computing Blog Series – Part 1 of 3