
Optimizing Server Power Consumption


When choosing a server power supply unit (PSU) for a data center or a server rack, one of the crucial considerations is ensuring the power supply can handle the server's peak power demand during peak usage hours.
One of the essential tools for achieving this goal is the concept of the PSU derating factor, sometimes called input limitation. In this article, we will look at derating factors and clarify their role in power supply selection and data center cabling design.

First, let us define the practice. In electrical engineering, derating refers to operating a device below its maximum rated output in order to guarantee reliability and prevent premature failure. Server PSU manufacturers often incorporate derating into the design as a failsafe measure, allowing the devices to operate within safe temperature ranges, maintaining performance and preventing potential infrastructure failure.


The derating factor is the ratio of the actual usable output to the maximum rated capacity, usually expressed as a percentage or a decimal for easy comparison. Derating can be broken down into three types (a short sketch of the calculation follows this list):

  1. Input line voltage variation derating
  2. Standard output (temperature) derating
  3. Optional internal derating curves
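
As a rough illustration of how a derating factor translates into usable capacity, here is a minimal sketch in Python. The function name, the 800 W rating, and the 0.89 factor are hypothetical and not taken from any vendor datasheet.

```python
# Minimal sketch: applying a derating factor to a PSU's rated output.
# All numbers here are hypothetical, not from a vendor datasheet.

def derated_output_watts(rated_watts: float, derating_factor: float) -> float:
    """Return the usable output after applying a derating factor in (0, 1]."""
    if not 0.0 < derating_factor <= 1.0:
        raise ValueError("derating factor must be in (0, 1]")
    return rated_watts * derating_factor

# Example: a hypothetical 800 W PSU with a 0.89 derating factor
# can safely be loaded to about 712 W.
print(derated_output_watts(800.0, 0.89))  # 712.0
```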

A common derating figure found in server PSU specifications is the "design temperature limit" value, which tells you how much of the PSU's capacity is available at its maximum operating temperature, for example 40 degrees Celsius. Suppose the PSU's rated current is 24 amps and the derating factor at 0 degrees Celsius (273 K) is approximately 0.89 (89 percent): at that temperature the PSU can actually deliver about 24 × 0.89 ≈ 21.4 amperes, multiplied by the PSU's output voltage to give the available power, which is less than its maximum rated capacity.
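
To make the arithmetic above explicit, here is a short worked sketch. Only the 24 A rating and the 0.89 factor come from the example; the 12 V rail voltage is an assumption added for illustration.

```python
# Worked example from the text: a 24 A rated PSU derated to 89 % at 0 degrees C.
rated_current_a = 24.0
derating_factor = 0.89          # 89 %, from the example above
output_voltage_v = 12.0         # assumed rail voltage, for illustration only

derated_current_a = rated_current_a * derating_factor        # ~21.4 A
available_power_w = derated_current_a * output_voltage_v     # ~256 W

print(f"{derated_current_a:.2f} A usable, {available_power_w:.0f} W at 12 V")
```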
