San Diego Supercomputer Center
and UCSD Announce ‘Triton Resource’
Three-Pronged Program to Focus on Data Analysis, Storage, and Scalable Clusters
October 14, 2008
The San Diego Supercomputer Center (SDSC) at the University of California, San Diego today announced plans for a unique facility called the Triton Resource, a high-impact, massive data analysis and storage system that will accelerate innovation, collaboration, and discovery through the use of leading-edge research cyberinfrastructure at SDSC.
SDSC is leading the effort to build the Triton Resource, focused primarily on serving UC and UC San Diego researchers. Specific details of Triton are still being defined by a committee of UCSD/UC users and SDSC technical specialists to achieve the best balance of storage, computing power, and memory for a range of cyberinfrastructure needs in world-class research. The planning committee is made up primarily of UC researchers so that Triton can be both a significant resource in its own right and a system whose major elements can be easily integrated into campus laboratories by taking advantage of UC San Diego’s multiple 10 Gigabit research networks.
Full details of SDSC’s Triton Resource will be released in coming months. First runs are scheduled for early 2009, with full production to begin in the spring.
“UC San Diego is building 21st century campus research infrastructure to accelerate 21st century research and education,” said SDSC Director Fran Berman. “The Triton Resource is a unique environment that will facilitate our ability to make sense of the tsunami of data available to us, and drive solutions of the most challenging problems in science and society.”
The Triton Resource definition and acquisition are being led by Philip Papadopoulos, Director of UC Systems at SDSC. "We're engaging the science users at the earliest stage in the process so that we can build the resource that best fits their needs,” said Papadopoulos. “The group is really defining the balance of the machine and must factor in all constraints. The group represents diverse interests such as data life cycle and preservation with the library, ultra-scale three-dimensional electron microscope image reconstruction, and more extensive computational modeling in astronomy. In addition to the user-driven needs, several technology and data specialists from SDSC are essential in helping the entire team evaluate specific tradeoffs and impacts. Together, we're making Triton a unique resource for UC San Diego and the UC system."
"The Triton resource will enable us to, when needed, dynamically expand the COMPAS cluster at Scripps to support experiments in ocean and coupled ocean-atmosphere modeling at a much larger scale than our dedicated facility can support,” said Prof. Bruce Cornuelle, Director of COMPAS (Center for Observation Modeling and Prediction at Scripps) at the Scripps Institution of Oceanography at UC San Diego, and a member of the Triton definition team. “The large memory nodes will be particularly useful for carrying out certain stages of analysis in our calculation pipeline."
Specifically, Triton’s three key components will bring UC San Diego and the UC system new capabilities:
- The Data Oasis high-performance storage system will assist in the practical manipulation of data across high-bandwidth paths to researchers throughout UC San Diego and the statewide UC system. The system will be fundamental to the data life cycle: the storage, management, and preservation of the deluge of data coming from research instruments and experiments that forms the basis for future generations of research results. The storage facility is envisioned as having 2-4 petabytes of raw disk space, including room to replicate all data.
- The Petascale Data Analysis Facility will be capable of analyzing data from the new generation of petascale computers (where one ‘petaflop’ of compute power equals one quadrillion floating-point calculations per second). This facility will address the critical need to make sense of the massive amounts of research data generated by today’s scientific instruments and high-performance computers. Preliminary specifications call for numerous large-memory or “fat” nodes and multiple connections to storage. With this architecture, a single node’s memory should be fillable from storage in about 60 seconds, so that large-scale data sets can quickly be brought into memory, manipulated, and written back out to disk.
- The scalable, shared-resource “condo” cluster, or group of linked computers, will be equipped with standard compute nodes but enhanced memory capability. The cluster may be configured to operate either in a standard batch mode or to allow users to run customized software stacks at scale, with full connectivity to large-scale storage. This system may also serve as a “cloud” resource, and will provide a foundation for UC San Diego researchers to add individual project resources, creating a shared computational facility that is professionally managed and expandable, and that supports both everyday and “heroic” simulation, modeling, analysis, and computation needed for 21st century research and education.
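The 60-second figure above is a simple bandwidth calculation: time to fill a node is its memory size divided by the aggregate bandwidth from storage. The numbers below are purely illustrative assumptions (Triton's node memory and storage bandwidth had not been published at the time of this announcement), but the sketch shows how the estimate works:

```python
# Back-of-envelope check of the "fill a fat node in about 60 seconds" figure.
# Both numbers below are hypothetical illustrations, NOT published Triton specs.

node_memory_gb = 256.0        # assumed memory of one large-memory "fat" node, GB
storage_bandwidth_gbps = 4.0  # assumed aggregate bandwidth from storage, GB/s

# Fill time = memory capacity / sustained transfer rate
fill_time_s = node_memory_gb / storage_bandwidth_gbps
print(f"Approximate time to fill node memory: {fill_time_s:.0f} s")  # → 64 s
```

Any combination of memory size and bandwidth with a ratio near 60 yields the quoted figure; the design point is the ratio, not either number alone.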
Connectivity between the Triton Resource and UC San Diego campus laboratories will be achieved through both production and research multi-10 Gigabit networks, allowing unprecedented integration into research laboratories. Connectivity for UC researchers elsewhere will be provided by a 10 Gigabit Ethernet campus connection completed in 2005 in partnership with the Corporation for Education Network Initiatives in California (CENIC). The network was one of the first of its kind in the United States and is connected to CENIC's high-performance backbone network, CalREN. The link provides state-of-the-art wide area network capacity to UC San Diego’s students, faculty, and staff, while also serving researchers with extreme needs for large-scale data transfer and more powerful distributed collaboration.
The Triton Resource program -- named after the mythical sea god and his three-pronged trident, whose image was adopted by UC San Diego as its mascot in 1964 -- was announced by Berman as part of the dedication ceremonies for the supercomputer center’s 80,000-square-foot building addition, which will double the size of the existing facility on the northwest end of campus. The new addition, which includes a second data center at SDSC and expands a facility that is already one of the largest academic data centers in the world, incorporates innovative engineering approaches aimed at increasing the overall efficiency of data centers on UC campuses and elsewhere.
Jan Zverina, 858 534-5111
Warren R. Froelich, 858 822-3622