The AI Institute for Intelligent Cyberinfrastructure with Computational Learning in the Environment, or ICICLE, will focus on next-generation intelligent cyberinfrastructure that makes using AI as easy as plugging an appliance into an electrical outlet.
The NSF awards $5M to the San Diego Supercomputer Center for its Prototype National Research Platform, a first-of-its-kind cyberinfrastructure ecosystem intended to help researchers expedite science and enable transformative discoveries.
The Biden administration has launched the National Artificial Intelligence Research Resource Task Force, on which San Diego Supercomputer Center Director and Physics Professor Michael Norman will serve to help democratize access to the resources and tools that fuel AI research and development.
The title of an Old West anthem and a Disney film, “Home on the Range” loosely describes the work of spatial ecologists. These field experts study the movements of animals within a specific geographic area—their “home range.”
Developing improved materials for applications such as energy storage and drug discovery is of interest to researchers and society alike. Quantum mechanics underpins the work of the molecular and materials scientists who develop these useful, futuristic products.
Long-time Director Michael Norman is stepping down, and SDSC Distributed High-Throughput Computing Lead and Physics Professor Frank Würthwein will serve in the role until a new permanent director is named.
The San Diego Supercomputer Center and CERN have teamed up, leveraging an alliance with Strategic Blue, a UK-based fintech company that helps organizations optimize procurement of cloud services.
With hundreds of active installations and a new award, SeedMeLab is launching its new Software-as-a-Service to benefit researchers.
San Diego Supercomputer Center’s Comet will conclude formal service as an NSF resource and transition to exclusive use by the Center for Western Weather and Water Extremes.
SDSC’s Expanse platform, via Core Scientific’s Plexus software stack, offers users a consumption-based, high-performance computing model that extends on-premises infrastructure and can run HPC workloads in supercomputer centers as well as in any of the major public cloud providers.