We aim to solve REAL problems on REAL commodity systems.


HKU Grid Point: Computing Resources

  • CS/SRG: Gideon-I, Gideon-II (SRG-IB + SRG-GbE + SRG-GPU)

  • Computer Center: hpcpower, hpcpower2, MDRP-I, MDRP-II

SRG-IB Cluster (48 nodes): Dell PowerEdge R610 rack-mounted server

  • CPU: 2 x Intel Quad-Core E5540 Xeon CPU, 2.53GHz, 8MB cache

  • DRAM: 32GB DDR3 memory, 1066MHz, dual ranked UDIMMs

  • Disk: 2 x 300GB 10K RPM SAS hard disks, running in RAID-1

  • NIC: QLogic QLE7240-CK single-port 20Gb InfiniBand adaptor

SRG-GbE Cluster (64 nodes):  Dell PowerEdge M610 blade server

  • CPU: 2 x Intel Quad-Core E5540 Xeon CPU, 2.53GHz, 8MB cache

  • DRAM: 16GB DDR3 memory, 1066MHz, dual ranked UDIMMs

  • Disk: 2 x 250GB 7.2K RPM SATA hard disks, running in RAID-1

  • NIC: Broadcom 5709 dual-port Gigabit Ethernet adaptor

SRG-GPU Cluster (12 nodes): IBM System x iDataPlex dx360 M3 server

  • CPU: 2 x Intel 6-Core X5650 Xeon CPU, 2.66GHz, 12MB cache

  • DRAM: 48GB ECC DDR3 memory, 1333MHz

  • GPU: 1 x NVIDIA Tesla M2050 GPU computing module, PCIe x16 Gen 2, 3GB GDDR5

  • Disk: 1 x 250GB 7.2K RPM simple-swap SATA II hard disk

  • NIC: QLogic QLE7340 single-port 4X QDR IB x8 PCI-E 2.0 HCA

Gideon-I Cluster (128 nodes): Pentium 4 PCs

  • CPU: Intel Pentium 4, 2.0 GHz, 512 KB L2 cache

  • DRAM: 512 MB or 1 GB (PC2100) DDR SDRAM

  • Disk: 40 GB IDE hard disk

  • NIC: 2 x Fast Ethernet adaptors

hpcpower Cluster (128 nodes): IBM x335 rack-mount server

  • CPU: 2 x Intel Xeon CPU, 2.8 GHz

  • DRAM: 2 GB RAM

  • Disk: 40GB IDE hard disk

  • NIC: Dual integrated 10/100/1000 Ethernet ports

hpcpower2 Cluster (24 nodes): IBM HS21 blade server

  • CPU: 2 x Intel Quad-Core Xeon CPU, 3GHz, 12MB L2 cache

  • DRAM: 8GB

  • Disk: 2 x 146GB SAS hard disks

  • NIC: Dual integrated 10/100/1000 Ethernet ports

MDRP-I Cluster (128 nodes):  Dell M610 blade server

  • CPU: 2 x Intel Quad-Core Nehalem CPU, 2.53GHz

  • DRAM: 32GB (112 nodes) / 16GB (16 nodes)

  • Disk: 2 x 250GB SATA hard disks

  • NIC: 4X DDR InfiniBand adaptor for IB-enabled nodes / Dual integrated 10/100/1000 Ethernet ports

MDRP-II Cluster (16 nodes): IBM BladeCenter Server HS22 blade server

  • CPU: 2 x Intel 6-Core Westmere CPU, 2.66GHz

  • DRAM: 48GB

  • Disk: 2 x 300GB SATA hard disks

  • NIC: 4X QDR InfiniBand adaptor for IB-enabled nodes / Dual integrated 10/100/1000 Ethernet ports

 
