
FAQ


Version 3, as of Oct 2011

As commissioning at the end of 2011 approaches, there will be increasing interest and questions from prospective customers. We will continue to add to this page with answers to the questions we most often hear. As more is learned, existing questions will be fleshed out in greater detail.

What is the profile of UCSB computing space right now?

With North Hall under construction, the CNSI computer room is the only one on campus with most of the characteristics of a true data center: adequate space, efficiency, and purpose-built design. Many other server rooms have been in place for years, but all occupy space converted from its original assignment. OIST operates an advanced server room in SAASB as a technology demonstrator.

What is the profile of UCSB computing equipment right now?

Measured in CPU horsepower, roughly 10% of existing UCSB server capacity supports non-research use (teaching and business) and about 90% supports research, so we expect a similar mix of customers for North Hall. Because much of the non-research equipment is older, it occupies a bit over 30 racks. A survey of research computing in 2005/2006 showed a bit over 45 racks. With 2010 commodity computing running more than 8 times denser, the entire business computing of the university could now fit in fewer than 4 racks. Physically, a single rack can now hold nearly 1000 CPU cores.
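
As a rough illustration of that consolidation arithmetic, the sketch below (Python) simply combines the figures quoted above - roughly 30 racks of older non-research gear, an 8x density improvement, and nearly 1000 cores per rack. It is an estimate based on those stated numbers, not a detailed inventory.

    # Rough consolidation estimate using the figures quoted above.
    legacy_business_racks = 30     # a bit over 30 racks of older non-research gear
    density_improvement = 8        # 2010 commodity hardware is roughly 8x denser
    cores_per_rack = 1000          # near the practical ceiling for a 2010 rack

    modern_racks_needed = legacy_business_racks / density_improvement
    print(f"Business computing fits in about {modern_racks_needed:.1f} racks")
    print(f"Those racks could hold up to {int(modern_racks_needed * cores_per_rack)} cores")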

Who can use North Hall?

North Hall is available for all of UCSB's research, teaching and business IT communities.

What is the general goal?

That UCSB have a campus data center that is built and operated in the generally accepted manner of managed data center space. This means that there are standards for the size and type of equipment, the amount and type of power consumed, the cooling methods employed, the physical security standards, and the operational and governance processes of a campus resource. It is understood that a managed data center employing modern technology standards will provide an economy of scale that is considerably improved over the placement of equipment throughout campus in rooms not designed to be data center space.

What is the governance model?

The Cyberinfrastructure committee chaired by VCR Witherell recommended that an executive governance group be formed. To that end, Michael Witherell, Gene Lucas, Tom Putnam, Todd Lee and others have been meeting regularly. The initial focus has been on the big issues of a usable baseline service, the financing of such a service, and a best-effort first pass at aligning the service with available resources. It is assumed that once the facility is operational and additional insights are gained, the governance group will focus on improving the alignment of service and resources.

When will North Hall be open for business?

Best estimate from the Design and Construction folks is now December 2011.

Who do I contact for further discussion?

Service delivery models and pricing are not yet completely established. However, questions regarding hosting individual servers within pre-provisioned rack space may be directed to Jamie Sonsini, Manager of UOSG, at Jamie.Sonsini@isc.ucsb.edu.

Exploration of larger scale projects involving multiple servers and technologies may be directed to Arlene Allen at arlene.allen@isc.ucsb.edu. All projects of significant scale will be provided with personal service for design and customization.

What are the expected hours of operation?

Based upon the financial model currently under discussion within senior management, a facility manager will be available during regular weekday business hours, 8am to 5pm, with the facility closed during the noon hour.

How much space will there be?

There is room for approximately 110 42U racks of the AR3100 style and dimensions (24" wide and 42" deep). Some racks will be occupied entirely by a single customer and others will be shared by multiple customers. All racks lock front and back. Even if all computing on campus were condensed into this one space, there would still be plenty of room for growth.

What will this space be like?

Based upon the physical characteristics of the room, we have placed most IT equipment into 3 basic tiers -

  1. Collections of equipment that result in up to 5kW of power per rack.
  2. Collections that run from 5kW to 18kW per rack.
  3. Collections over 18kW.

The average performance server in 2011 tends to be 2U in height and consume around 450W. Legacy disk storage, tape storage, and most networking gear use considerably less power per unit of space. As such, it is not uncommon for a full rack of such equipment to stay within the 5kW tier. The typical commodity HPC server for research computing draws around 400W in 1U of rack space; 38 of these in a single rack total around 15kW, well within the second tier.
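
To make the tier arithmetic concrete, here is a small Python sketch that classifies a planned rack against the 5kW and 18kW boundaries listed above. The server counts and wattages in the examples are the figures quoted in this answer; other configurations are simply different inputs.

    # Classify a planned rack into the power tiers described above.
    TIER_1_LIMIT_KW = 5    # up to 5kW per rack
    TIER_2_LIMIT_KW = 18   # 5kW to 18kW per rack

    def rack_tier(server_count, watts_per_server):
        """Return (total kW, tier number) for a rack of identical servers."""
        total_kw = server_count * watts_per_server / 1000.0
        if total_kw <= TIER_1_LIMIT_KW:
            tier = 1
        elif total_kw <= TIER_2_LIMIT_KW:
            tier = 2
        else:
            tier = 3
        return total_kw, tier

    # Examples drawn from the figures above:
    print(rack_tier(10, 450))   # ten 2U performance servers -> (4.5, 1)
    print(rack_tier(38, 400))   # full rack of 1U HPC nodes  -> (15.2, 2)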

In a managed data center, space within the room must be allocated in a balanced manner so as not to create utility hot spots. Part of the data center service is allocating space in this optimized way, so that UCSB derives the maximum benefit from the investment it has made in the facility.

What is provided by the data center?

All racks, and power distribution down to the strips within the racks, are provided by the facility. Basic 1Gb network ports are available for each piece of installed customer equipment. Racks are mounted on ISO Bases exceeding Zone 4 temblor standards. A data center grade dry-pipe fire suppression system is installed throughout the room. Unistrut overhead utility troughs are in place for running customer cabling between racks. Basic HVAC is provided through cutouts in the elevated flooring. A card key security system, video surveillance and locking racks provide the basic security framework.

What is the security model?

North Hall will comply with ISO 27000 (formerly ISO 17799) physical security standards. Card key locking will be used throughout. Card keys are issued to certified representatives of the organizations maintaining equipment. Owners will designate the named individuals responsible for their equipment, and in all cases these individuals will be trained in the data center rules of conduct. For owners of IT gear who do not have their own staffing, a rate-based service will be available.

What's in a rack?

There will be racks filled by a single customer and racks that are shared by multiple customers. A small amount of space is reserved in the racks for network switches. Customers can directly attach their servers or other equipment to the network infrastructure provided by the data center. Power distribution strips in each rack will allow customers to plug directly into 46 available 208v C13 or C19 receptacles.

What will space cost?

A basic unit of space is 1U within a 42U rack, 38U of which will be assignable. There is no charge for use of baseline North Hall hosting. Hosting requests requiring specialized capital investment will be handled by the executive governance group. OIST will not be engaged in any rate and recharge for this service. It is conceivable that more specialized or advanced services will be developed by one or more campus IT groups, including OIST. The customary rate and recharge proposal and approval process will be used in such an instance.

What power is available?

The room is designed around 208v distribution to the rack. Each rack will have two 30A circuits (L6-30R) by default. High power applications will require individual installation of higher-current 208v circuits. There is sufficient power to the building and the room even if the room is completely filled. For equipment not capable of direct 208v attachment, the customer must provide a step-down transformer.
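
For rough planning purposes, the sketch below estimates the power available from the default two 30A, 208v circuits. The 80% continuous-load derating is an assumption drawn from common electrical practice, not a stated North Hall figure.

    # Approximate usable power from the default rack feed: two 30A, 208v circuits.
    VOLTS = 208
    AMPS_PER_CIRCUIT = 30
    CIRCUITS = 2
    DERATING = 0.8   # assumption: 80% continuous-load derating, common practice

    nominal_kw = VOLTS * AMPS_PER_CIRCUIT * CIRCUITS / 1000.0
    usable_kw = nominal_kw * DERATING
    print(f"Nominal: {nominal_kw:.1f} kW, usable (derated): {usable_kw:.1f} kW")
    # Roughly 12.5 kW nominal / ~10 kW usable per rack, comfortably in the 5-18 kW
    # middle tier; denser configurations will need additional 208v circuits installed.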

What E-Power is available?

The room will be commissioned with 360kW of UPS capacity, available for equipment and cooling during regional power outages when campus chilled water fails. A 400kW generator will back up the UPSs for continuous operation.

What networking is available?

North Hall is part of the campus 10Gb core switching and fibre backbone. From the NH building switch, a hierarchy of aggregation switching brings the network down to the rows and individual racks. Larger customers with multiple racks of equipment will likely maintain their own layer of switching, which then connects to the room aggregation layer. Customers with less equipment will be able to plug into rack distribution switch ports made available for their use - up to four 1Gb ports per server. Management of the network down to the demarcation at the customer equipment will be a data center responsibility.

Are there advanced networking services?

Services such as firewalling, intrusion detection and prevention, LAN/WAN bridging, WAN extension, VPN, layer two tunnels, etc. are all beyond the scope of the baseline service. It is, however, expected that these kinds of advanced services will develop in response to UCSB needs.

Will there be an IP based KVM access mechanism?

Dedicated keyboard, video, mouse (KVM) access has been largely superseded by the modern server, which has both remote access and remote power on/off capabilities built in. Customers with equipment that is several years old and not so equipped will have to supply their own IP-based KVM equipment.

What advanced tools are available for customer management of their equipment?

An asset management system called Rackwise, the same system used at the San Diego Supercomputer Center (SDSC), will be in place. All equipment brought into North Hall will go through an intake process that logs it into the Rackwise database. The system provides both pictorial and data views of the racks and the equipment within them, and allows general queries across the entire data center's contents.

A power management tool called SPM will be used to view real-time power use down to the individual plug within a rack and to aggregate statistics on that power use. SPM also allows remote power on/off management of each plug.

The room UPS will be software enabled to communicate power management events over the network. For customers wishing to be notified of events such as UPS on/off transitions and utility (Edison) failures, the appropriate program interface will be made available. It is the customer's responsibility to install and maintain such software within their server environment.
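
Since the program interface has not yet been published, the sketch below only illustrates the general pattern of a customer-side event handler: a small script that UPS monitoring software might be configured to invoke when a power event occurs. The event names, log path, and shutdown behavior are all assumptions to be replaced once the actual interface is available.

    # Hypothetical event hook: assumes the UPS monitoring software can be configured
    # to run a customer-supplied script with the event name as an argument.
    import sys
    import subprocess
    import datetime

    LOGFILE = "/var/log/ups-events.log"      # assumption: customer-chosen path
    CRITICAL_EVENTS = {"onbattery", "lowbattery", "edison-failure"}  # assumed names

    def main():
        event = sys.argv[1] if len(sys.argv) > 1 else "unknown"
        with open(LOGFILE, "a") as log:
            log.write(f"{datetime.datetime.now().isoformat()} UPS event: {event}\n")
        if event in CRITICAL_EVENTS:
            # Begin an orderly shutdown while UPS/generator power is still available.
            subprocess.run(["/sbin/shutdown", "-h", "+5", f"UPS event: {event}"])

    if __name__ == "__main__":
        main()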

Who gets to use these tools?

Thus far, our intent is to make the data center management software tools available to those who have been issued physical access card keys. Most tools of this ilk are not designed with a delegated administration security model in mind, so this group of individuals will have to be trained in the proper use of the tools and the associated etiquette.