IT Project: Communications Building Data Center Migration
This project is part of ITS’s efforts to evolve the university’s compute and data storage capabilities and services to meet the needs of the UCSC research, academic, and administrative communities, including relocating systems and services housed in the Communications Building to better-suited facilities.
Goals and objectives
- Build resiliency with redundant power and connectivity.
- Return data center space to campus by consolidating systems and storage in modernized on-campus data centers and at off-campus locations.
- Comply with industry-standard guidelines so that infrastructure and systems can scale with lean support models.
- Reduce costs by minimizing distributed cooling, racks, and networking.
- Meet decarbonization goals for the power supply.
- Improve security monitoring and controls.
- Improve disaster recovery by mitigating identified disaster risks.
- Evolve the network at the various data center locations to fully support a variety of requirements, including those of academic research and instructional systems (for example, Science DMZ, high-performance/high-speed, and low-latency requirements).
Alignment with campus goals
- Improve efficiency, effectiveness, and resilience
- Increase UC Santa Cruz’s research profile and impact
Approach
Project goals will be achieved through a three-phase approach that embraces a range of options, including on-campus consolidation and relocating or migrating equipment based on research, instructional, or enterprise workload needs, with the goal of optimizing performance, redundancy, and disaster recovery for each system.
Timeline
People
Sponsors
- Executive sponsor: Aisha Jackson
- Program leadership: Byron Walker
Program team
- Program manager: Pam Swain, ITS
- Governance and policy: Shawn Duncan, ITS
- ITS Divisional & Research Liaisons: Jay Olson, Hart Hancock, Angie Steele, Jeffrey Weekley, Paul Sosbee, Tania Pannabecker, Mike Nardell, Yuri Cantrell
Project managers
- Pam Swain
- Jocelyn Ferialdi
- Heather Sevedge
- Geoff Smith
- Stephanie Nielson
- John Lallemand
Contact and feedback
- Program Manager: Pam Swain
Frequently asked questions
General
What is the colocation data center?
The off-site colocation data center is a state-of-the-art facility in Quincy, Washington, that provides professionally managed space and redundant power. A major benefit of using a colocation facility is the assurance of flexible capacity for future growth. Learn more about Physical Hosting
Is the colocation facility a long-term solution?
Yes, the colocation facility is one prong of the UCSC data center and cloud strategy. UCSC also maintains capacity in the cloud and in the physical data center in the Communications Building.
Why is it essential to have computing and storage capabilities off campus at a colocation facility?
We do not have sufficient power capacity on campus to support our research systems. Hosting off campus also reduces exposure to risks such as fire, earthquakes, and extended power outages.
Cost
How much will the colocation service cost?
Currently, there is no cost for physical hosting at the colocation facility; it is covered under the campus Information User (IU) fee.
Will the cost change over time?
Data center services are currently covered under the campus Information User (IU) fee. While the fee is reviewed on a regular basis, no changes to the IU fee are anticipated.
Are the shipment costs handled by ITS?
We recommend shipping directly to the colocation facility from the vendor.
How will this growth be addressed both in terms of Colocation server space and the IT user fee? As costs may increase over time, is there a framework for changes in the IT user fee assessment?
Under the current campus Information User (IU) model, physical hosting is provided at no additional cost. A campus-wide steering committee evaluates and determines cost models, so costs are subject to change.
Performance
Is UCSC’s colocation facility reliable?
UCSC’s colocation facility meets quality and information security standards set by the International Organization for Standardization (ISO/IEC 27001:2013 and ISO 9001:2015 certificates).
A reliable data center requires redundant power and redundant network connections; the colocation facility provides both.
- Redundant power: The colocation facility provides uninterrupted power with both redundant power supplies and generators that support the entire facility. This is an improvement in reliability compared to the on-campus data center, which has limited generator and battery backup capabilities.
- Redundant network connections: UCSC currently provides three separate network connections to the colocation facility, including a 100Gb CENIC circuit.
What is the current latency between UCSC’s colocation facility in Quincy, WA and the UCSC campus?
Around 35 milliseconds round trip.
If you have latency requirements of less than 25 milliseconds, or any other network questions, please contact ResearchIT@ucsc.edu.
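If you want a rough, independent check of the round-trip time from your own machine to a system you already operate at the colocation facility, timing a TCP connection is one simple way to approximate it. The sketch below assumes a hypothetical hostname and port; it is not an ITS-provided endpoint or tool.

```python
# Minimal sketch for spot-checking round-trip latency to a host you operate at
# the colocation facility. The hostname and port are placeholders (hypothetical),
# not ITS-provided endpoints. Uses only the Python standard library.
import socket
import statistics
import time

HOST = "my-colo-host.example.ucsc.edu"  # hypothetical; substitute your own system
PORT = 22                               # any open TCP port on that system, e.g. SSH

def tcp_rtt_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Time a TCP connection setup and return the round trip in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # close immediately; only the connection setup time matters here
    return (time.perf_counter() - start) * 1000.0

samples = [tcp_rtt_ms(HOST, PORT) for _ in range(5)]
print(f"median round trip: {statistics.median(samples):.1f} ms")
```

Results will vary with local network conditions, so treat this as a sanity check rather than a formal measurement; for firm latency requirements, contact ResearchIT@ucsc.edu as noted above.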
We actively monitor service providers and the network. Should there be an outage, the UCSC Data Center Operations team communicates issues to colocation customers.
Is UCSC’s colocation facility capable of high-performance computing (HPC)?
Yes, the UCSC colocation facility is a modern facility with stable, abundant power, sited away from major natural hazards, and capable of supporting HPC. One example is Hummingbird, the campus’s open-access research computing cluster. If you are interested in exploring the HPC options available to you, please request a consultation.
What equipment restrictions are there?
The baseline requirements for purchasing hardware, based on campus ARB standards, are:
- Baseline (purchased by customer)
  - Minimum of two SFP+/SFP28 ports (additional ports in multiples of two)
  - Integrated Lights Out Manager (ILOM). ILOM enables you to actively manage and monitor the server independently of the operating system state, providing reliable Lights Out Management (LOM).
  - Power supplies purchased in multiples of two, with a minimum of two power supplies
- Not supported
  - Cat6 (RJ45) cables for general networking (Cat6 is allowed only for console/ILOM connections)
  - Cat6A/Cat8 cables
  - Single-power-supply servers (for example, Mac Minis are not supported)
  - Servers without a console port (e.g., ILOM)
  - Servers/devices that are not rack-mountable in a 19” cabinet (deskside computers are not rack-mountable)
- Campus Main Data Center: hardware depth
  - 30” depth limit for servers/storage/devices in the Campus Data Center
Support
What types of support are provided by the colocation facility?
UCSC’s colocation facility has professional, experienced staff who offer hands-on support for installation and troubleshooting, including swapping parts, cycling machines, and checking connections. The colocation facility guarantees that a technician will be assigned within six hours, and expedited support is available when needed.
Local IT staff will continue to provide design, consultative, software, OS and remote support, and will work with the colocation facility to address issues should they arise.
What is the policy regarding equipment lifetime (sunsetting), at the colocation facility?
Currently there is no policy for sunsetting equipment at the colocation facility. The Research Computing and Data Infrastructure Committee, made up of research faculty and Academic Senate representatives, will be drafting a policy for consideration.
Note that all systems, regardless of location, must meet Information Security policies.
How many spare parts need to go up with a cluster, and who is in charge of informing faculty/researchers when more are needed?
Space has been allocated at the colocation facility for each department to store spare parts. You will need to work with your Divisional Liaison to access this space. All systems must follow the ITS Architectural Review Board (ARB) standards for power, networking, and other physical requirements for data center hosting.
The ARB provides a cooperative, knowledge-rich forum where stakeholders, presenters, and ARB members collaborate on design reviews and standard-setting, ensuring that architectural decisions align with ITS goals and fostering learning, innovation, and progress at UC Santa Cruz.
Who are the approved vendors, and can equipment be deployed directly to the Colocation?
Currently there is no list of approved vendors. Equipment can be shipped directly to the colocation facility; any direct shipments must be coordinated with your Divisional Liaison, or you can fill out this form to start the direct installation process.
Does the UCSC Baskin Engineering LDAP server handle user management in the Colocation?
No. All systems shall use central authentication services.
Installation
What are the steps and associated timeline for equipment received on campus to become available in the colocation facility in production mode?
To begin the process of shipping and installing a system at the colocation facility, fill out the physical hosting request form. ITS will work with you to ensure connectivity and work through the steps to ship and install the equipment.
The timeline for systems shipped from UCSC depends on the complexity of the system and its networking requirements. For a simple system, you can expect the time from shipping to physical installation to be within 7-10 days.
For systems that require consultation for the engineering of new network configurations and solutions, the process may take longer.
You can expedite a system becoming available in production by shipping directly to the colocation facility. Please work with your Divisional Liaison or Research IT to facilitate shipping equipment to the colocation facility.
I have a small-scale system (1-5 servers). Can ITS assist me with shipping and deploying these servers?
Yes, small-scale systems are easy to accommodate. Fill out the physical hosting request form to kick off the process with ITS. The process will vary depending on whether the hardware is being shipped directly from the manufacturer/seller or is already on campus.
If you are shipping and deploying more complex systems, the physical hosting request form should also be used to start the process. Please consult with your Divisional Liaison or Research IT for questions concerning this process.
What flexibility is there to build clusters in stages?
Clusters can be built in stages at the colocation facility. Unlike the on-campus data centers, which have finite capacity, the colocation facility provides room to expand.
ITS can assist with the technical design, proposals for funding, and cluster deployment. We encourage you to communicate your vision and future plans for expansion as soon as you know about them. For assistance, please contact your Divisional Liaison or Research IT.
Other capabilities
What options are there for high-volume storage of research data at UCSC’s colocation facility?
ITS is exploring solutions to support all phases of the research computing lifecycle, including general purpose high-volume storage solutions, in partnership with the Office of Research and academic divisions.
Historically, the on-campus data center has not had the capacity to support high-volume shared storage solutions. Expanding our service using colocation gives us opportunities to explore supporting faculty needs in this area.
Note that some academic divisions may have local solutions; we encourage you to speak with your Divisional Liaison for additional information.
How do I grant access to my resources at the colocation facility for my external research partners?
As with on-campus systems, UCSC affiliates can request Sundry accounts, which external research partners can use to access resources at the colocation facility.
Can I have access to a VM so that I can install my own software?
Yes, we have a variety of platforms that provide virtual machines. Administrative privileges are dependent on the platform and the application. For more information and contact information see Virtual Hosting Services.
What is the maximum power draw per 1U? Are multi-GPU servers supported?
Yes, multi-GPU systems have been deployed at the colocation facility. There is no maximum power draw per 1U; instead, we have a limit of 15 kW per cabinet. Once the 15 kW maximum is reached in a cabinet, we will install the remaining equipment in additional cabinets.
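For planning purposes, a back-of-the-envelope calculation can show roughly how a proposed system fits within the 15 kW per-cabinet limit. The per-server wattage below is an assumed example figure, not an ARB specification; check your vendor’s specifications for real values.

```python
# Rough estimate of how many cabinets a system needs under the 15 kW per-cabinet limit.
# watts_per_server is an assumed example value, not a standard; use your vendor's specs.
CABINET_LIMIT_W = 15_000

servers = 12
watts_per_server = 2_400  # e.g., a dense multi-GPU node (assumption for illustration)

total_w = servers * watts_per_server
cabinets_needed = -(-total_w // CABINET_LIMIT_W)  # ceiling division
print(f"Total draw: {total_w / 1000:.1f} kW -> {cabinets_needed} cabinet(s)")
```

In practice, equipment that exceeds a cabinet’s power budget is simply placed in additional cabinets, as described above, so this is only a planning aid.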
How can 10Gb networking be requested? What are the costs? Is 40Gb/100Gb available?
We currently offer 10Gb fiber connectivity at the colocation facility, so there is no need to specially request 10Gb. All servers will need an SFP to accept a 10Gb fiber connection. We do not provide connectivity above 10Gbps for hosted equipment at the colocation facility.