TRIUMF Grid Status Pages

TRIUMF @ Service Challenge 4

This is the status of our SC4 cluster as of August 2006.

Hardware

Servers

  • 3 EM64T dCache pool node systems, each with:
    • 2 GB memory
    • hardware RAID - 3ware 9xxx SATA RAID controller
    • Seagate Barracuda 7200.8 drives in hardware RAID 5 - 8 x 250 GB
  • 2 EM64T dCache pool node systems, each with:
    • 4 GB memory
    • hardware RAID - 3ware 9xxx SATA RAID controller
    • Seagate Barracuda 7200 drives in hardware RAID 5 - 8 x 500 GB
  • 1 dual Opteron 246 dCache headnode server with:
    • 2 GB memory
    • 3ware 9xxx SATA RAID controller
    • WD Caviar SE drives in hardware RAID 0 - 2 x 250 GB
    • a 4560-SLX IBM Tape Library (currently with 2 SDLT 320 tape drives)
  • 1 EM64T system used as an FTS Server with:
    • 2 GB memory
    • 3 SCSI 73 GB drives for the OS and for Oracle's needs.
    • a 4560-SLX IBM Tape Library (currently with 2 SDLT 600 tape drives)
  • 2 EM64T systems used as LFC and VOBOX Servers, each with:
    • 2 GB memory
    • 2 SATA 160 GB drives in a RAID 1 configuration

Storage

  • 11+ TB disk
  • 15+ TB tape
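
As a rough sanity check, the disk total follows from the pool-node arrays listed under Hardware. The sketch below is a minimal estimate, assuming each 8-drive hardware RAID 5 array gives seven drives of usable space; the raw result is consistent with the 11+ TB quoted above once filesystem and dCache overhead are taken into account.

    # Rough usable-disk estimate from the pool-node arrays listed under Hardware.
    # Assumption: each 8-drive RAID 5 array loses one drive's worth of capacity to parity.
    POOL_ARRAYS = [
        (3, 8, 250),  # 3 pool nodes with 8 x 250 GB drives
        (2, 8, 500),  # 2 pool nodes with 8 x 500 GB drives
    ]

    usable_gb = sum(nodes * (drives - 1) * size_gb
                    for nodes, drives, size_gb in POOL_ARRAYS)

    print("~%.2f TB usable" % (usable_gb / 1000.0))  # ~12.25 TB raw RAID 5 capacity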

Software

  • dCache/SRM - deployed in June 2005
  • FTS/Oracle - deployed in late July 2005
  • LFC/VOBOX - deployed in October 2005
  • gLite 3.0 - deployed in May 2006

Networking

Tape Transfer Status

We used a home-brewed HSM tape interface for tape transfer tests. The tape transfer status page for the period from April 19 to May 4 is linked here; the current SC4 tape status page is here.
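
The home-brewed interface itself is not reproduced here. As background, a dCache pool typically drives its tape backend through an external copy script that the pool calls once per file to flush it to, or stage it from, the HSM. The sketch below is a hypothetical Python skeleton of such a script, not TRIUMF's actual implementation: the calling convention, the hsm://triumf-tape URI form, and the tape_store/tape_fetch helper commands are all assumptions for illustration.

    #!/usr/bin/env python
    """Hypothetical skeleton of a pool-side HSM copy script (not the TRIUMF code).

    Assumed calling convention (an assumption, for illustration only):
        hsm_copy.py put|get <pnfsId> <localFile> [-si=<storageInfo>] [-uri=<hsmUri>]
    Exit code 0 means success; on 'put' the script prints a URI for the tape copy.
    'tape_store' and 'tape_fetch' stand in for whatever drives the tape library.
    """
    import subprocess
    import sys


    def main(argv):
        command, pnfs_id, local_file = argv[1:4]
        # Collect trailing "-key=value" options such as -si=... or -uri=...
        options = dict(o.lstrip("-").split("=", 1) for o in argv[4:] if "=" in o)

        if command == "put":
            # Write the pool's copy of the file to tape, then report where it went.
            subprocess.check_call(["tape_store", local_file, pnfs_id])
            print("hsm://triumf-tape/?pnfsid=%s" % pnfs_id)
        elif command == "get":
            # Stage the file back from tape into the path the pool asked for.
            subprocess.check_call(["tape_fetch", options.get("uri", pnfs_id), local_file])
        else:
            sys.exit("unsupported command: %s" % command)
        return 0


    if __name__ == "__main__":
        sys.exit(main(sys.argv))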

Tier 2 Status

To-Do List

  • an Oracle RAC will be ordered this week (week 35).
  • 2 blade chassis of worker nodes have been ordered.