Communications Building Data Center Migration FAQs
Service Offering
What is the colocation data center?
The off-site colocation data center is a state-of-the-art facility that provides professionally managed space and redundant power. One of the major benefits of using a colocation facility is the assurance of flexible capacity for future growth. The campus colocation facility is located in Quincy, Washington.
Why is it essential to have computing and storage capabilities off campus at a colocation facility?
We do not have sufficient power capacity on campus to support our research systems. Hosting systems off campus also reduces the risk posed by fire, earthquakes, and extended power outages.
Is the colocation facility a long-term solution?
Yes, the colocation facility is one prong of the UCSC data center and cloud strategy, which also includes cloud-based data centers and the physical data center in the Communications Building.
How much will the colocation service cost?
Currently, there is no cost for physical hosting at the colocation facility; it is covered under the campus Information User (IU) fee.
Will the cost change over time?
Data center services are currently covered under the campus Information User (IU) fee. While the fee is reviewed on a regular basis, there are no anticipated changes to the IU fee.
Is UCSC’s colocation facility reliable?
UCSC’s colocation facility meets the latest quality and security management standards set by the International Organization for Standardization (ISO 27001:2013 ICS Certificate and ISO 9001:2015 Certificate).
For any data center to be reliable, it needs redundant power and redundant network connections. The colocation facility provides both.
- Redundant power: The colocation facility provides uninterrupted power with both redundant power supplies and generators that support the entire facility. This is an improvement in reliability compared to the on-campus data center, which has limited generator and battery backup capabilities.
- Redundant Network Connections: UCSC currently provides three separate network connections to the colocation facility, including a 100Gb CENIC circuit.
We actively monitor service providers and the network. Should there be an outage, the UCSC Data Center Operations team communicates the issue to colocation customers.
What types of support are provided by the colocation facility?
UCSC’s colocation facility has professional, experienced staff who offer hands-on support for installation and troubleshooting, including swapping parts, power-cycling machines, and checking connections. The colocation facility guarantees that a technician will be assigned within six hours, and expedited support is available when needed.
Local IT staff will continue to provide design, consultative, software, OS and remote support, and will work with the colocation facility to address issues should they arise.
What is the current latency between UCSC’s colocation facility in Quincy, WA and the UCSC campus?
The latency between UCSC’s colocation facility in Quincy, WA and the UCSC campus averages around 35 milliseconds roundtrip.
If you have latency requirements of less than 25 milliseconds, or any other network questions, please contact ResearchIT@ucsc.edu.
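If you want to sanity-check the round-trip latency from campus to a system you host at the colocation facility, one quick approach is to time a TCP connection to it. The sketch below is a minimal example only; the hostname and port are placeholders for a machine you operate, not an actual UCSC service.

```python
# Rough round-trip latency check to a host at the colocation facility.
# HOST and PORT are placeholders -- substitute a system you operate there.
import socket
import time

HOST = "colo-host.example.ucsc.edu"  # hypothetical hostname
PORT = 22                            # any open TCP port on that host
SAMPLES = 5

rtts = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    # Establishing a TCP connection takes roughly one round trip.
    with socket.create_connection((HOST, PORT), timeout=5):
        pass
    rtts.append((time.perf_counter() - start) * 1000)  # milliseconds

print("connect times (ms):", [round(r, 1) for r in rtts])
print(f"average: {sum(rtts) / len(rtts):.1f} ms")
```

Values consistently near the 35 millisecond figure above are expected; if your workload needs tighter latency, contact ResearchIT@ucsc.edu as noted.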
What is the policy regarding equipment lifetime (sunsetting) at the colocation facility?
Currently there is no policy for sunsetting equipment at the colocation facility. The Research Computing and Data Infrastructure Committee, made up of research faculty and Academic Senate representatives, will be drafting a policy for consideration.
Note that all systems, regardless of location, must meet Information Security policies.
Is UCSC’s colocation facility capable of high-performance computing (HPC)?
Yes, the UCSC colocation facility is a modern facility with stable and abundant power, located away from major natural hazards, and is capable of supporting HPC. One example is Hummingbird, the campus open-access research computing cluster. If you are interested in exploring the HPC options available to you, please request a consultation.
Given that storage needs always grow over time, how will this growth be addressed both in terms of Colocation server space and the IT user fee? As costs may increase over time, is there a framework for changes in the IT user fee assessment?
The current campus Information User (IU) model covers physical hosting at no additional cost. A campus-wide steering committee evaluates and determines the cost models, so costs are subject to change.
How many spare parts need to go up with a cluster and who is in charge of informing faculty/researchers that there is a need for more?
Space has been allocated at the colocation facility for each department to store spare parts. You will need to work with your Divisional Liaison to access this space. All systems must follow the ITS Architectural Review Board* (ARB) standards for power, networking, and other physical requirements for data center hosting.
*The ARB aims to establish a cooperative, knowledge-rich forum where stakeholders, presenters, and ARB members collaboratively participate in design reviews and standard-setting, ensuring that architectural decisions align with ITS goals and fostering learning, innovation, and progress at UC Santa Cruz.
Who are the approved vendors, and can equipment be deployed directly to the Colocation?
Currently there is no list of approved vendors. Equipment can be shipped directly to the colocation facility; any direct shipment must be coordinated with your Divisional Liaison, or you can fill out the installation request form to start the process.
What equipment restrictions are there?
The baseline requirements for purchasing hardware, based on campus ARB standards, are:
- Baseline (purchased by Customer)
- Minimum of two SFP+/SFP28 ports (added in multiples of two)
- Integrated Lights Out Manager (ILOM). ILOM enables you to actively manage and monitor the server independently of the operating system state, providing reliable lights-out management (LOM); see the sketch following this list.
- Power supplies purchased in multiples of two (minimum of two)
- Not Supported
- Cat6 (RJ45) cables for general networking (permitted only for console/ILOM connections)
- Cat6A/Cat8 cables
- Single power supply servers (e.g., Mac Minis)
- Servers without a console port (e.g., ILOM)
- Servers/devices that are not rack-mountable in a 19” cabinet (e.g., deskside computers)
- Campus Main Data Center: Hardware Depth
- 30” Depth limit for servers/storage/devices in the Campus Data Center
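To illustrate the lights-out management requirement above, here is a minimal sketch of managing a server through its ILOM/BMC, assuming the controller speaks standard IPMI and the common ipmitool utility is available. The hostname and credentials are placeholders, not values provided by ITS.

```python
# Minimal out-of-band management sketch via a server's ILOM/BMC over IPMI.
# Requires the ipmitool utility; hostname and credentials are placeholders.
import subprocess

BMC_HOST = "my-server-ilom.example.ucsc.edu"  # hypothetical ILOM address
BMC_USER = "admin"                            # placeholder credential
BMC_PASS = "changeme"                         # placeholder credential

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the BMC over its network interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# These work even when the operating system is down or unreachable.
print(ipmi("chassis", "power", "status"))  # current power state
print(ipmi("sel", "list"))                 # hardware event log
```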
Are the shipment costs handled by ITS?
We recommend shipping directly from the vendor to the colocation facility.
Does the UCSC Baskin Engineering LDAP server handle user management in the Colocation?
No. All systems shall use central authentication services.
Installation
What are the steps and timeline for equipment received on campus to become available in production at the colocation facility?
To begin the process of shipping and installing a system at the colocation facility, fill out the physical hosting request form. ITS will work with you to ensure connectivity and work through the steps to ship and install the equipment.
The timeline for systems shipped from UCSC depends on the complexity of the system and its networking requirements. For a simple system, you can expect the time from shipping to physical installation to be 7-10 days.
Systems that require consultation and engineering of new network configurations and solutions may take longer.
Users can expedite a system becoming available in production by shipping directly to the colocation facility. Please work with your Divisional Liaison or Research IT to facilitate shipping equipment to the colocation facility.
I have a small scale system (1-5 servers). Can ITS assist me with shipping and deploying these servers?
Yes, small-scale systems are easy to accommodate. Fill out the physical hosting request form to kick off the process with ITS. The process will vary depending on whether the hardware is being shipped directly from the manufacturer/seller or is already on campus.
If you are shipping and deploying more complex systems, the physical hosting request form should also be used to start the process. Please consult with your Divisional Liaison or Research IT for questions concerning this process.
What flexibility is there to build clusters in stages?
Clusters can be built in stages in the colocation facility. Unlike the finite capacity of the data centers on the UCSC campus, the colocation facility provides the ability to expand.
ITS can assist with the technical design, proposals for funding, and cluster deployment. We encourage you to communicate your vision and future plans for expansion as soon as you know about them. For assistance, please contact your Divisional Liaison or Research IT.
Auxiliary Capabilities
What options are there for high-volume storage of research data at UCSC’s colocation facility?
ITS is exploring solutions to support all phases of the research computing lifecycle, including general purpose high-volume storage solutions, in partnership with the Office of Research and academic divisions.
Historically the data center hasn’t had the capacity to support high volume shared storage solutions. Expanding our service using colocation gives us opportunities to explore supporting faculty needs in this area.
Note that some academic divisions may have local solutions; we encourage you to speak with your Divisional Liaison for additional information.
How do I grant access to my resources at the colocation facility for my external research partners?
Just like on-campus systems, UCSC affiliates can request Sundry accounts which external research partners can use to access resources at the colocation facility.
Can I have access to a VM so that I can install my own software?
Yes, we have a variety of platforms that provide virtual machines. Administrative privileges are dependent on the platform and the application. For more information and contact information see Virtual Hosting Services.
What is the maximum power draw per 1U? Are multi-GPU servers supported?
Yes, multi-GPU systems have been deployed at the colocation facility. There isn’t a maximum power draw per 1U; the limit is 15kW per cabinet. Once the 15kW maximum is reached in a cabinet, we will install the remaining equipment in additional cabinets.
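As a rough planning aid, the arithmetic below shows how the 15kW cabinet limit translates into server counts. The per-server wattage is an assumed figure; substitute the measured or nameplate draw of your own hardware.

```python
# Back-of-the-envelope capacity check against the 15 kW per-cabinet limit.
CABINET_LIMIT_W = 15_000   # 15 kW per cabinet (from the FAQ)
SERVER_DRAW_W = 1_200      # assumed draw of one multi-GPU server; use your own figure

servers_per_cabinet = CABINET_LIMIT_W // SERVER_DRAW_W
headroom_w = CABINET_LIMIT_W - servers_per_cabinet * SERVER_DRAW_W

print(f"servers per cabinet (power-limited): {servers_per_cabinet}")
print(f"remaining headroom: {headroom_w} W")
# Equipment beyond the cabinet's power budget goes into additional cabinets.
```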
How can 10Gb networking be requested? What are the costs? Is 40Gb/100Gb available?
We currently offer 10Gb fiber connectivity at the colocation facility, so there is no need to request 10Gb separately. All servers will need an SFP to accept a 10Gb fiber connection. We do not provide connectivity above 10Gb for hosted equipment at the colocation facility.
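If you want to confirm that a hosted Linux server actually negotiated a 10Gb link once it is installed, one simple check is to read the interface speed the kernel reports through sysfs. The interface name below is a placeholder.

```python
# Verify a Linux interface negotiated a 10Gb link by reading its sysfs speed (Mb/s).
from pathlib import Path

IFACE = "eth0"  # placeholder; list interfaces with `ip link` and substitute yours

speed_mbps = int(Path(f"/sys/class/net/{IFACE}/speed").read_text().strip())
status = "OK" if speed_mbps >= 10_000 else "below 10Gb"
print(f"{IFACE}: {speed_mbps} Mb/s ({status})")
```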