High Energy & Nuclear Physics (HENP) SIG
October 4th 2011 – Fall Member Meeting
Jason Zurawski, Internet2 Research Liaison

Agenda
• Group Name/Future Meetings
• LHCONE
• DYNES
• SC11 Planning
• AOB

Group Name/Future Meetings
• “HENP SIG” is too hard for people to dereference when looking at the agenda
  – “Physics SIG”?
  – “Science SIG” – more embracing…
  – Others?
• Alternate Proposal: do we need an “LHC BoF” to focus on network support topics?

Agenda
• Group Name/Future Meetings
• LHCONE
• DYNES
• SC11 Planning
• AOB


LHCONE High-level Architecture

LHCONE – Early Planning


“Joe’s Solution”

LHCONE Status
• Two “issues” identified at the DC meeting as needing particular attention:
  – Multiple paths across the Atlantic
  – Resiliency
• Agreed to have the architecture group work out a solution
• LHCONE is a response to the changing dynamic of data movement in the LHC environment.
• It is composed of multiple parts:
  – North America, Transatlantic Links, Europe
  – Others?
• It is expected to be composed of multiple services:
  – Multipoint service
  – Point-to-point service
  – Monitoring service

LHCONE Multipoint Service
• Initially created as a shared Layer 2 domain.
• Uses 2 VLANs (2000 and 3000) on separate transatlantic routes in order to avoid loops.
• Enables up to 25G on the transatlantic routes for LHC traffic.
• Use of dual paths provides redundancy.

LHCONE Point-to-Point Service
• Planned point-to-point service
• Suggestion: build on the efforts of DYNES and the DICE-Dynamic service
• DICE-Dynamic service being rolled out by ESnet, GÉANT, Internet2, and USLHCnet
  – Remaining issues being worked out
  – Planned commencement of service: October 2011
  – Built on OSCARS (ESnet, Internet2, USLHCnet) and AUTOBAHN (GÉANT), using the IDC protocol (see the sketch below)
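The point-to-point service is reservation-oriented: a site asks an IDC to set up a circuit of a given bandwidth between two endpoints for a window of time. The Python sketch below only illustrates that request/response pattern; the endpoint URL, payload fields, and JSON response are assumptions invented for the example, not the actual OSCARS/AUTOBAHN IDC protocol interface.

# Minimal sketch of a dynamic point-to-point circuit request (illustrative only).
# IDC_URL, the payload fields, and the JSON response shape are placeholders; the
# real IDC protocol used by OSCARS/AUTOBAHN is a web-service interface, not this.
import json
import urllib.request

IDC_URL = "https://idc.example.net/circuits"  # hypothetical endpoint

def request_circuit(src_port, dst_port, bandwidth_mbps, start_ts, end_ts):
    payload = {
        "source": src_port,           # e.g. an edge port at the requesting site
        "destination": dst_port,      # e.g. a port at an exchange point
        "bandwidth": bandwidth_mbps,  # guaranteed rate for the reservation
        "startTime": start_ts,        # UNIX timestamps for the reservation window
        "endTime": end_ts,
    }
    req = urllib.request.Request(
        IDC_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)        # expect some reservation identifier back

if __name__ == "__main__":
    print(request_circuit("site-a:eth1", "manlan:port-7", 1000,
                          1317700000, 1317703600))

In DYNES the analogous request is issued automatically by the DYNES Agent on the FDT server rather than by hand.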

LHCONE Monitoring Service
• Planned monitoring service
• Suggestion: build on the efforts of DYNES and the DICE-Diagnostic service
• DICE-Diagnostic service being rolled out by ESnet, GÉANT, and Internet2
  – Remaining issues being worked out
  – Planned commencement of service: October 2011
  – Built on perfSONAR (see the sketch below)
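What makes the monitoring service useful for LHCONE is that perfSONAR exposes measurement results (throughput, loss, delay) for query, so a degraded path between sites can be spotted programmatically. The sketch below shows that polling pattern in Python; the archive URL, query parameters, and JSON fields are assumptions made for the example and not a documented perfSONAR-PS interface.

# Illustrative only: poll a measurement archive for recent throughput results
# between two sites and flag anything well below the expected rate.
# MA_URL, the query parameters, and the result fields are placeholders.
import json
import urllib.request

MA_URL = "https://ps-archive.example.edu/throughput"  # hypothetical archive

def low_throughput_samples(src, dst, floor_mbps=1000, limit=10):
    query = f"{MA_URL}?source={src}&destination={dst}&limit={limit}"
    with urllib.request.urlopen(query) as resp:
        results = json.load(resp)     # assume a list of measurement records
    return [r for r in results if r.get("throughput_mbps", 0) < floor_mbps]

if __name__ == "__main__":
    for sample in low_throughput_samples("bnl-pt1.example.gov",
                                         "cern-pt1.example.ch"):
        print("low throughput result:", sample)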


LHCONE (NA) Multipoint Service


LHCONE Pilot (Late Sept 2011)

Mian Usman, DANTE, LHCONE technical proposal v2.0

LHCONE Pilot
• Domains interconnected through Layer 2 switches
• Two VLANs (nominal IDs: 3000, 2000)
  – VLAN 2000 configured on the GEANT/ACE transatlantic segment
  – VLAN 3000 configured on the US LHCNet transatlantic segment
• Allows both TA segments to be used and provides TA resiliency
• 2 route servers per VLAN
  – Each connecting site peers with all 4 route servers (see the sketch below)
• Keep in mind this is a “now” solution; it does not scale well to more transatlantic paths
  – Continued charge to the Architecture group
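The pilot addresses the two DC-meeting issues structurally: each VLAN rides a different transatlantic segment (multiple paths), and every connecting site peers with the route servers on both VLANs (resiliency). The small Python model below just spells out that fan-out; the VLAN-to-segment pairing comes from the slide, while the route-server hostnames are placeholders.

# Minimal model of the pilot's dual-VLAN layout. Only the VLAN/segment pairing
# is from the slides; the route-server names are invented for illustration.
TRANSATLANTIC_SEGMENT = {
    2000: "GEANT/ACE transatlantic segment",
    3000: "US LHCNet transatlantic segment",
}

ROUTE_SERVERS = {
    2000: ["rs1-vlan2000.example.net", "rs2-vlan2000.example.net"],
    3000: ["rs1-vlan3000.example.net", "rs2-vlan3000.example.net"],
}

def peerings_for_site(site):
    """Each connecting site peers with all four route servers, two per VLAN."""
    sessions = []
    for vlan, servers in ROUTE_SERVERS.items():
        for rs in servers:
            sessions.append((site, rs, vlan, TRANSATLANTIC_SEGMENT[vlan]))
    return sessions

if __name__ == "__main__":
    for session in peerings_for_site("umich-lhcone-gw"):
        print(session)

Losing one transatlantic segment leaves the peerings on the other VLAN intact, which is the resiliency the pilot is after; the scaling concern noted above is that this enumeration grows with every additional path.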


LHCONE in GEANT


LHCONE in GEANT

Internet2 (NA) – New York Status
• VLANs 2000 and 3000 for the multipoint service are configured.
  – Transatlantic routes, Internet2, and CANARIE are all participating in the shared VLAN service.
• A new switch will be installed at MAN LAN in October.
  – Will enable a new connection by BNL
• Peering with the University of Toronto through the CANARIE link to MAN LAN is complete
• End sites that have direct connections to MAN LAN are:
  – MIT
  – BNL
  – BU/Harvard

LHCONE (NA) - Chicago
• VLANs for the multipoint service were configured on 9/23.
  – Correctly configured shortly thereafter to prevent a routing loop
  – Testing on the link can start any time.
• Status of the FNAL Cisco:
  – Resource constraints on the Chicago router have prevented this from happening.
  – Port availability is the issue.
• End sites
  – See diagram from this summer


LHCONE (NA) - Chicago

MAN LAN
• New York Exchange Point
• Ciena Core Director and Cisco 6513
• Current connections on the Core Director:
  – 11 × OC-192
  – 9 × 1 Gig
• Current connections on the 6513:
  – 16 × 10G Ethernet
  – 7 × 1G Ethernet

MAN LAN Roadmap
• Switch upgrade:
  – A Brocade MLXe-16 was purchased with:
    • 24 10G ports
    • 24 1G ports
    • 2 100G ports
  – Internet2 and ESnet will be connected at 100G.
• The Brocade will allow landing transatlantic circuits of greater than 10G.
• An IDC for dynamic circuits will be installed.
  – Complies with the GLIF GOLE definition

MAN LAN Services
• MAN LAN is an Open Exchange Point.
• 1 Gbps, 10 Gbps, and 100 Gbps interfaces on the Brocade switch.
  – 40 Gbps could be available by 2012.
• Map dedicated VLANs through for Layer 2 connectivity beyond the Ethernet switch.
• With the Brocade, the possibility of higher layer services should there be a need.
  – This would include OpenFlow being enabled on the Brocade.
• Dynamic services via an IDC.
• perfSONAR-ps instrumentation.

WIX
• WIX = Washington DC International Exchange Point
• Joint project being developed by MAX and Internet2, to be transferred to MAX to manage once in operation.
• WIX is a state-of-the-art international peering exchange facility, located at the Level 3 POP in McLean, VA, designed to serve research and education networks.
• WIX is architected to meet the diverse needs of different networks.
• Initially, the WIX facility will hold 4 racks, expandable to 12 racks as needed.
  – Bulk cables between the existing MAX and Internet2 suites will also be in place.
• WIX is implemented with a Ciena Core Director and a Brocade MLXe-16.

WIX Roadmap
• Grow the connections to existing exchange points.
• Expand the facility with “above the net” capabilities located in the suite.
  – Allows for easy access both domestically and internationally
• Grow the number of transatlantic links to ensure adequate connectivity as well as diversity.

WIX Services
• Dedicated VLANs between participants for traffic exchange at Layer 2.
• WDC-IX will be an Open Exchange Point.
• Access to Dynamic Circuit Networks such as Internet2 ION.
• With the Brocade, there exists the possibility of higher layer services, should there be a need.
  – Possibility of OpenFlow being enabled on the Brocade
• 1 Gbps, 10 Gbps, and 100 Gbps interfaces are available on the Brocade switch.
  – 40 Gbps could be available by 2012.
• perfSONAR instrumentation

Agenda
• Group Name/Future Meetings
• LHCONE
• DYNES
• SC11 Planning
• AOB


DYNES Projected Topology (October 2011)

DYNES Hardware
• Inter-domain Controller (IDC) server and software
  – The IDC creates virtual LANs (VLANs) dynamically between the FDT server, local campus, and wide area network
  – The IDC software is based on the OSCARS and DRAGON software, which is packaged together as the DCN Software Suite (DCNSS)
  – The DCNSS version correlates to stable, tested versions of OSCARS. The current version of DCNSS is v0.5.4.
  – Initial DYNES deployments will include both DCNSSv0.6 and DCNSSv0.5.4 virtual machines
    • Currently XEN based
    • Looking into KVM for future releases
• A Dell R410 1U server has been chosen, running CentOS 5.x

DYNES Hardware
• Fast Data Transfer (FDT) server
  – The FDT server connects to the disk array via the SAS controller and runs the FDT software (see the sketch below)
  – The FDT server also hosts the DYNES Agent (DA) software
  – The standard FDT server will be a Dell R510 server with a dual-port Intel X520 DA NIC. This server will have a PCIe Gen 2.0 x8 card along with 12 disks for storage.
• DYNES Ethernet switch options:
  – Dell PC6248 (48 1GE ports, 4 10GE-capable ports (SFP+, CX4, or optical))
  – Dell PC8024F (24 10GE SFP+ ports, 4 “combo” ports supporting CX4 or optical)
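To connect the hardware pieces: once the IDC has provisioned a VLAN, the actual data movement is carried out by FDT between the disk servers at each end. The snippet below is a minimal sketch of launching such a transfer from Python; the fdt.jar path and the -c/-d options are stated from memory and should be checked against the FDT documentation before use.

# Minimal sketch: start an FDT transfer to a remote FDT server over a
# provisioned circuit. Hostnames, paths, and the exact fdt.jar options are
# illustrative assumptions.
import subprocess

def start_fdt_transfer(remote_host, local_path, remote_dir):
    cmd = [
        "java", "-jar", "/opt/fdt/fdt.jar",
        "-c", remote_host,   # act as client, connect to the remote FDT server
        "-d", remote_dir,    # destination directory on the remote side
        local_path,          # local file (or directory) to send
    ]
    return subprocess.run(cmd, check=True)

if __name__ == "__main__":
    start_fdt_transfer("fdt.site-b.example.edu", "/data/run42.tar", "/incoming")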

Our Choices
• http://www.internet2.edu/ion/hardware.html
• IDC
  – Dell R410 1U server
  – Dual 2.4 GHz Xeon (64-bit), 16G RAM, 500G HD
  – http://i.dell.com/sites/content/shared-content/data-sheets/en/Documents/R410-Spec-Sheet.pdf
• FDT
  – Dell R510 2U server
  – Dual 2.4 GHz Xeon (64-bit), 24G RAM, 300G main, 12TB through RAID
  – http://i.dell.com/sites/content/shared-content/data-sheets/en/Documents/R510-Spec-Sheet.pdf
• Switch
  – Dell 8024F or Dell 6048
  – 10G vs. 1G sites; copper ports and SFP+; optics on a site-by-site basis
  – http://www.dell.com/downloads/global/products/pwcnt/en/PC_6200Series_proof1.pdf
  – http://www.dell.com/downloads/global/products/pwcnt/en/switch-powerconnect-8024f-spec.pdf


DYNES Data Flow Overview

Phase 3 Group A Members
• AMPATH
• Mid-Atlantic Crossroads (MAX)
  – The Johns Hopkins University (JHU)
• Mid-Atlantic Gigapop in Philadelphia for Internet2 (MAGPI)*
  – Rutgers (via NJEdge)
  – University of Delaware
• Southern Crossroads (SOX)
  – Vanderbilt University
• CENIC*
  – California Institute of Technology (Caltech)
• MREN*
  – University of Michigan (via MERIT and CIC OmniPoP)
• Note: USLHCNet will also be connected to the DYNES Instrument via a peering relationship with DYNES

* temp configuration of static VLANs until future group

Phase 3 Group B Members
• Mid-Atlantic Gigapop in Philadelphia for Internet2 (MAGPI)
  – University of Pennsylvania
• Metropolitan Research and Education Network (MREN)
  – Indiana University (via I-Light and CIC OmniPoP)
  – University of Wisconsin–Madison (via BOREAS and CIC OmniPoP)
  – University of Illinois at Urbana-Champaign (via CIC OmniPoP)
  – The University of Chicago (via CIC OmniPoP)
• Lonestar Education And Research Network (LEARN)
  – Southern Methodist University (SMU)
  – Texas Tech University
  – University of Houston
  – Rice University
  – The University of Texas at Dallas
  – The University of Texas at Arlington
• Florida International University (connected through FLR)

Phase 3 Group C Members
• Front Range GigaPop (FRGP)
  – University of Colorado Boulder
• Northern Crossroads (NoX)
  – Boston University
  – Harvard University
  – Tufts University
• CENIC**
  – University of California, San Diego
  – University of California, Santa Cruz
• CIC OmniPoP***
  – The University of Iowa (via BOREAS)
• Great Plains Network (GPN)***
  – The University of Oklahoma (via OneNet)
  – The University of Nebraska–Lincoln

** deploying own dynamic infrastructure
*** static configuration based

Agenda
• Group Name/Future Meetings
• LHCONE
• DYNES
• SC11 Planning
• AOB

It’s the Most Wonderful Time of the Year
• SC11 is ~1 month out
• What’s brewing?
  – LHCONE Demo
    • Internet2, GEANT, and end sites in the US and Europe (UMich and CNAF initially targeted; any US end site is open to get connected)
    • The idea will be to show “real” applications and use of the new network
  – DYNES Demo
    • Booths (Internet2, Caltech, Vanderbilt)
    • External deployments (Group A and some Group B)
    • External to DYNES (CERN, SPRACE, HEPGrid)

It’s the Most Wonderful Time of the Year
• What’s brewing?
  – 100G capabilities
    • ESnet/Internet2 coast-to-coast 100G network
    • Lots of other demos using this
  – SRS (SCinet Research Sandbox)
    • Demonstration of high-speed capabilities
    • Lots of entries
    • Use of OpenFlow devices
  – Speakers at the Internet2 booth
    • CIOs from campus/federal installations
    • Scientists
    • Networking experts


DYNES Demo - Topology


DYNES Demo - Participants

Agenda
• Group Name/Future Meetings
• LHCONE
• DYNES
• SC11 Planning
• AOB

AOB
• UF Lustre work?
• MWT2 upgrades?

High Energy & Nuclear Physics (HENP) SIG
October 4th 2011 – Fall Member Meeting
Jason Zurawski, Internet2 Research Liaison

For more information, visit http://www.internet2.edu/science