Accepted NRE demos
Data Transfer Node Service in SCinet
Location: NOC 1081 (SCinet)
Data Transfer Nodes (DTNs) have been deployed in large data centers and HPC facilities as the front end of large systems, and they have served process/compute-intensive science workflows very well. Recently, many data-intensive science workflows have emerged that demand different types of infrastructure. One solution for supporting this trend is to provide DTN services at network exchange points. This resource provides a platform for prototyping new data-intensive science workflows, enabling more reliable data transfer services, providing a monitoring and measurement point for science workflows, and serving as a test point when data transfers fail. We will deploy a DTN as shared infrastructure in the StarLight facility, which will allow the SC community to prototype and test data-intensive science workflows before the SC conference. We will then transition the service platform to SC17 SCinet for staging and for the duration of the conference.
Programmable Privacy-Preserving Network Measurement for
Network Usage Analysis and Troubleshooting
Location: Booth 1653 (Center for Data Intensive Science
– Open Commons Consortium)
Network measurement and monitoring are instrumental to network
operations, planning and troubleshooting. However, increasing
line rates (100+Gbps), changing measurement targets and
metrics, privacy concerns, and policy differences across
multiple R&E network domains have introduced tremendous
challenges in operating such high-speed heterogeneous
networks, understanding the traffic patterns, providing for
resource optimization, and locating and resolving network
issues. There is strong demand for a flexible,
high-performance measurement instrument that can empower
network operators to achieve the versatile objectives of
effective network management, especially effective resource
provisioning. In this demonstration, we propose AMIS (Advanced Measurement Instrument and Services) to achieve programmable, flow-granularity, event-driven network measurement; to sustain scalable line rates; to meet evolving measurement objectives; and to derive knowledge for network advancement.
Let’s Do Full DPI of the SCinet Network and Do Some
SCIENCE
Location: NOC 1081 (SCinet)
The fundamental scientific paradigm addressed in this research project is the application of greater network packet visibility and packet inspection to secure computer network systems. The greater visibility and inspection will enable detection of advanced content-based threats that exploit application vulnerabilities and are designed to bypass traditional security approaches such as firewalls and antivirus scanners. Greater visibility and inspection are achieved through identification of the application protocol (e.g., HTTP, SMTP, Skype) and, in some cases, extraction and processing of the information contained in the packet payload. Analysis is then performed on the resulting DPI data to identify potentially malicious behavior. Deep packet inspection (DPI) technologies have been developed to obtain this visibility and inspect the application protocol and contents.
We have developed a novel piece of technology that enables us to create Layer 7 metadata for up to 7,000 applications/protocols at a variety of line rates. This demonstration will show which protocols have been dissected and how this information can be used to enable interesting use cases such as situational awareness, asset discovery, network mapping, and passive identity management.
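To give a flavor of per-flow protocol labeling, the minimal Python sketch below classifies packets from a capture using scapy. It is a deliberately naive, port-based stand-in for the real Layer 7 dissection described above (which inspects payloads rather than well-known ports); the capture file name and port map are placeholders.

```python
from collections import Counter
from scapy.all import rdpcap, TCP, UDP  # scapy assumed to be installed

# Naive port-based hints -- a stand-in for true L7 dissection, which would
# classify by payload signatures rather than well-known ports.
PORT_HINTS = {80: "http", 443: "tls", 25: "smtp", 53: "dns", 22: "ssh"}

def classify(pkt):
    """Return a coarse protocol label for one packet."""
    for layer in (TCP, UDP):
        if pkt.haslayer(layer):
            l4 = pkt[layer]
            return PORT_HINTS.get(l4.dport) or PORT_HINTS.get(l4.sport) or "unknown"
    return "non-tcp/udp"

packets = rdpcap("sample.pcap")  # hypothetical capture file
print(Counter(classify(p) for p in packets).most_common())
```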
Tracking Network Events with Write Optimized Data
Structure
Location: NOC 1081 (SCinet)
The basic action of two IP addresses communicating is still a critical part of most security investigations. Typical tools log events and send them to a variety of traditional databases. These databases are optimized for querying rather than ingestion. When faced with indexing hundreds of millions of events, such indices degrade until they can no longer accept insertions at a rate acceptable for network monitoring.
Write-optimized data structures (WODS) provide a novel alternative to traditional storage structures (e.g., B-trees). WODS trade minor degradations in query performance for significant gains in insertion rates, typically on the order of 10 to 100 times faster. Our Diventi project uses a write-optimized B-tree, known as a Bε-tree, to index entries in connection logs from a common network security tool (Bro). In previous tests, Diventi sustained a rate of 20,000 inserts per second, while after 300,000,000 events a traditional B-tree degraded to 100 inserts per second.
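To make the write-optimization idea concrete, here is a toy Python sketch of the buffering strategy behind Bε-tree-style structures: inserts accumulate in a buffer and are applied in batches, so per-event random work is replaced by amortized bulk work. This illustrates the general technique only, not Diventi's actual implementation; the buffer capacity and the example key are arbitrary.

```python
import bisect

class BufferedIndex:
    """Toy single-node illustration of write-optimized buffering:
    inserts land in a buffer and are merged into the sorted index in
    batches, trading a small query cost for much faster ingestion."""

    def __init__(self, buffer_capacity=1024):
        self.buffer = []              # pending (key, value) messages
        self.buffer_capacity = buffer_capacity
        self.keys = []                # sorted keys of applied entries
        self.values = {}              # key -> value for applied entries

    def insert(self, key, value):
        self.buffer.append((key, value))
        if len(self.buffer) >= self.buffer_capacity:
            self._flush()

    def _flush(self):
        # one batched merge instead of one random descent per event
        for key, value in sorted(self.buffer, key=lambda kv: kv[0]):
            if key not in self.values:
                bisect.insort(self.keys, key)
            self.values[key] = value
        self.buffer.clear()

    def query(self, key):
        # queries must also consult the not-yet-flushed buffer
        for k, v in reversed(self.buffer):
            if k == key:
                return v
        return self.values.get(key)

idx = BufferedIndex()
idx.insert(("10.0.0.1", "10.0.0.2"), {"count": 1})   # e.g. a connection-log entry
print(idx.query(("10.0.0.1", "10.0.0.2")))
```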
mdtmFTP: Optimized Bulk Data Transfer on Multicore
Systems
Location: Booth 1653 (Center for Data Intensive Science – Open Commons Consortium)
Large scale, high capacity transport of data across wide area
networks (WANs) is a critical requirement for many science
workflows. Today, this need is being addressed with architectures and protocols designed many years ago, when single-core processors were predominant. Multicore has since become the norm in high-performance computing, yet many science communities still rely on approaches oriented toward single cores. The Fermilab network research group has developed a new high-performance data transfer tool, called mdtmFTP, to maximize data transfer performance on multicore platforms. mdtmFTP has several advanced features. First, mdtmFTP adopts a pipelined I/O design: a data transfer task is carried out in a pipelined manner across multiple cores, and dedicated I/O threads are spawned to perform I/O operations in parallel. Second, mdtmFTP uses a purpose-built multicore-aware data transfer middleware (MDTM) to schedule cores for its threads, which optimizes use of the underlying multicore system. Third, mdtmFTP implements a large virtual file mechanism to address the lots-of-small-files (LOSF) problem. Finally, mdtmFTP utilizes multiple optimization mechanisms, including zero copy, asynchronous I/O, batch processing, and pre-allocated buffer pools, to improve performance. In this demo, we will use mdtmFTP to demonstrate optimized bulk data movement over long-distance wide area network paths. Our purpose is to show that mdtmFTP performs better than existing data transfer tools.
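The pipelined I/O idea (dedicated reader and sender threads exchanging pre-allocated buffers so disk and network work overlap) can be sketched in a few lines of Python. This is a conceptual illustration only, not mdtmFTP's implementation, and it omits MDTM's core scheduling, zero copy, and the LOSF virtual-file mechanism; the block size, pool depth, host, and port are placeholders.

```python
import queue
import socket
import threading

BLOCK_SIZE = 4 * 1024 * 1024   # 4 MiB blocks (placeholder)
POOL_DEPTH = 16                # pre-allocated buffer pool depth (placeholder)

def pipelined_send(path, host, port):
    """Overlap disk reads and network sends using a recycled buffer pool."""
    free = queue.Queue()
    full = queue.Queue()
    for _ in range(POOL_DEPTH):
        free.put(bytearray(BLOCK_SIZE))

    def reader():
        with open(path, "rb", buffering=0) as f:
            while True:
                buf = free.get()
                n = f.readinto(buf)
                if not n:                  # end of file
                    full.put(None)
                    return
                full.put((buf, n))

    def sender():
        with socket.create_connection((host, port)) as s:
            while True:
                item = full.get()
                if item is None:
                    return
                buf, n = item
                s.sendall(memoryview(buf)[:n])
                free.put(buf)              # recycle the buffer

    threads = [threading.Thread(target=reader), threading.Thread(target=sender)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```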
BigData Express: Toward Predictable, Schedulable and
High-Performance Data Transfer
Location: Booth 1653 (Center for Data Intensive Science – Open Commons Consortium)
Big Data has emerged as a driving force for scientific
discoveries. Large scientific instruments (e.g., colliders,
and telescopes) generate exponentially increasing volumes of
data. To enable scientific discovery, science data must be
collected, indexed, archived, shared, and analyzed, typically
in a widely distributed, highly collaborative manner. Data
transfer is now an essential function for science discoveries,
particularly within big data environments. Although
significant improvements have been made in the area of bulk
data transfer, the currently available data transfer tools and
services will not be able to successfully address the
high-performance and time-constraint challenges of data
transfer to support extreme-scale science applications for the
following reasons: disjoint end-to-end data transfer loops,
cross-interference between data transfers, and existing data
transfer tools and services are oblivious to user requirements
(deadline and QoS requirements). We are working on the BigData
Express project (BDE) to address these problems.
The BDE research team has released several software packages: (1) the BDE web portal, which allows users to access BigData Express data transfer services; (2) the BDE scheduler, which schedules and orchestrates resources at BDE sites to support high-performance data transfer; and (3) BDE AmoebaNet, an SDN-enabled network service that provides an "application-aware" network and allows applications to program the network at run time for optimum performance. These software packages can be deployed to support three types of data transfer: real-time, deadline-bound, and best-effort.
In this demo, we will use the BDE software to demonstrate bulk data movement over wide area networks. Our goal is to show that BDE can successfully address the high-performance and time-constraint challenges of data transfer for extreme-scale science applications.
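As a hypothetical illustration of how an application might request a deadline-bound transfer from such a service, the sketch below posts a request to a BDE-style web portal. The endpoint URL, field names, and payload are invented for illustration; the abstract does not describe BDE's actual API.

```python
import requests  # illustrative REST interaction only

# All endpoint and field names below are hypothetical.
transfer_request = {
    "type": "deadline-bound",
    "source": "dtn01.site-a.example.org:/data/run42/",
    "destination": "dtn07.site-b.example.org:/archive/run42/",
    "volume_tb": 12,
    "deadline": "2017-11-16T08:00:00Z",
}
resp = requests.post("https://bde-portal.example.org/api/transfers",
                     json=transfer_request, timeout=30)
print(resp.status_code, resp.json().get("transfer_id"))
```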
Deep Network Visibility Using Multidimensional Data
Analysis
Location: NOC 1081 (SCinet)
Reservoir Labs proposes to demonstrate a usable and
scalable network security workflow based on ENSIGN [1], a
high-performance data analytics tool involving tensor
decompositions. The enhanced workflow provided by ENSIGN
assists in identifying attackers who craft their actions to
subvert signature-based detection methods and automates much
of the labor intensive forensic process of connecting isolated
incidents into a coherent attack profile. This approach
complements traditional workflows that focus on highlighting
individual suspicious activities. ENSIGN uses advanced tensor
decomposition algorithms to decompose rich network data with
multiple metadata attributes into components that capture
network patterns spanning the entire multidimensional data
space. This enables easier identification of anomalies and
simpler analysis of large complex patterns.
Reservoir Labs proposes to apply ENSIGN over the network security logs available through the SCinet network stack to provide deep visibility into network behaviors/trends including but not limited to unauthorized traffic patterns, temporal patterns such as beaconing, security attacks, and patterns of authorized and unauthorized services.
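The kind of multidimensional analysis described here can be illustrated with a generic CP (CANDECOMP/PARAFAC) decomposition of a source/destination/time count tensor built from flow logs. This sketch uses the open-source tensorly package purely as a stand-in; it is not ENSIGN, and the tensor dimensions, rank, and synthetic "flow records" are arbitrary.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac  # tensorly >= 0.5 assumed

# Build a (source, destination, hour-of-day) count tensor from flow records.
# The "flow records" here are random placeholders for parsed security logs.
rng = np.random.default_rng(0)
flows = rng.integers(low=0, high=[50, 50, 24], size=(10000, 3))
tensor = np.zeros((50, 50, 24))
for src, dst, hour in flows:
    tensor[src, dst, hour] += 1

# Rank-5 CP decomposition: each component couples a set of sources, a set of
# destinations, and a temporal profile into one network-wide pattern.
weights, factors = parafac(tl.tensor(tensor), rank=5, init="random", random_state=0)
src_factor, dst_factor, time_factor = factors

# A component whose time factor is sharply periodic or concentrated can hint
# at beaconing-like behavior worth a closer look.
print(time_factor.shape)  # (24, 5)
```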
Highly Distributed Science DMZs
Location: Booth 1653 (Center for Data Intensive Science – Open Commons Consortium)
These demonstrations will showcase the utility of using several specialized software stacks to scale Science DMZs across multiple highly distributed sites.
Software Defined International WANs (SD-WANs) and International Software Defined Exchange Interoperability
This demonstration will show how traditional exchange services, architectures, and technologies are being radically transformed by the virtualization of resources at all levels, enabling much more flexible, dynamic, and programmable communication capabilities.
IRNC Software Defined Exchange (SDX) Services Integrated with 100 Gbps Data Transfer Nodes (DTNs) for Petascale Science
Location: Booth 1653 (Center for Data Intensive Science
– Open Commons Consortium)
This demonstration illustrates high performance transport services for petascale science over WANs through SDN-enabled DTNs, which are being designed to optimize capabilities for supporting large scale, high capacity, high performance, reliable, and sustained individual data streams.
Jupyter For Integrating Science Workflows And Network
Orchestration
Location: Booth 1653 (Center for Data Intensive Science
– Open Commons Consortium)
This demonstration will showcase the utility of using Jupyter
for integrating scientific workflows with network
orchestration techniques required for data intensive science.
OCC’s Environmental Data Commons at 100 Gbps
Location: Booth 1653 (Center for Data Intensive Science
– Open Commons Consortium)
We will demonstrate how 100 Gbps networks can be used to interoperate storage and compute resources located in two geographically distributed data centers, so that a data commons can span multiple data centers, and how two data commons connected by a 100 Gbps network can peer with each other.
Applying P4 to Support Data Intensive Science Workflows on Large Scale Networks
Location: Booth 1653 (Center for Data Intensive Science
– Open Commons Consortium)
These demonstrations will show how P4 can be used to support
large scale data intensive science workflows on high capacity
high performance WANs and LANs.
International WAN High Performance Data Transfer Services Integrated with 100 Gbps Data Transfer Nodes for Petascale Science (PetaTrans)
Location: Booth 1653 (Center for Data Intensive Science
– Open Commons Consortium)
The PetaTrans 100 Gbps Data Transfer Node (DTN) research
project is directed at improving large scale WAN services for
high performance, long duration, large capacity single data
flows. iCAIR is designing, developing, and experimenting with multiple designs and configurations for 100 Gbps Data Transfer Nodes (DTNs) over 100 Gbps Wide Area Networks (WANs), especially trans-oceanic WANs, for PetaTrans (high performance transport for petascale science), including demonstrations at SC17.
Large Scale Optimized WAN Data Transport for Geophysical Sciences
Location: Booth 1653 (Center for Data Intensive Science
– Open Commons Consortium)
This demonstration will showcase a capability for optimal
transfer of large scale geophysical data files across WANs.
This capability is enabled by an innovative method for
integrating data selection and transport using orchestrated
workflows and network transport.
University of Texas at Dallas – Intel – Aspera with SC2017
Options
Location: Booth 995 (University of Texas at Dallas)
The University of Texas at Dallas (UTD) is planning to
demonstrate a high-speed Optical Software Defined Network
(SDN) at Supercomputing 2017 in Denver. To demonstrate the full performance of the optical SDN, we need to push relatively large amounts of data over the network and to showcase low-latency networking for a variety of special experiments. Recent involvement with the Pacific Research Platform (PRP) and the first meeting of the in-development National Research Platform (NRP) led to the realization that Data Transfer Node technology will be required for participation in this fast-developing platform.
An Adaptive Network Testbed based on SDN-IP
Location: Booth 973 (NICT)
In this demonstration, we will show the conceptual design of an SDN-IP testbed.
Next Generation SDN Architectures and Applications with
Gigabit/sec to Terabit/sec Flows and Real-Time Analytics for Data
Intensive Sciences
Location: Booth 663 (California Institute of Technology /
CACR)
This submission is a brief summary of the innovative on-floor
and wide area network configuration, topology, and SDN methods
using multiple controllers and state of the art data transfer
methods, optical network and server technologies, as well as
many of the demonstrations being supported, which are
described more fully in separate NRE submissions associated
with the Caltech booth.
Network Control and Multi-domain Interconnection
Programming for Next-Generation Data-Intensive Sciences
Location: Booth 1653 (Center for Data Intensive Science
– Open Commons Consortium)
The next generation of globally distributed science programs
face unprecedented challenges in the design and implementation
of their networking infrastructures to achieve efficient,
flexible, secure, global data transfers for scientific
workflows. These networking infrastructures must be realized with easy-to-program control planes, for example using emerging software-defined networking (SDN) techniques, to allow extensibility, composability, and reactivity, and to handle ever-evolving requirements such as resource allocation, measurement, security, policy enforcement, fault tolerance, and scaling. At the same time, because such science
programs tend to span multiple autonomous organizations, they
must respect the privacy and policies of individual
organizations. This demo shows novel control-plane programming
primitives and abstractions toward realizing highly
programmable control and interconnection of such data
intensive science networks.
SENSE:
SDN for End-to-end Networked Science at the Exascale
Location: Booth 663 (California Institute of Technology /
CACR)
Distributed application workflows with big-data requirements depend on predictable network behavior to work efficiently. The SENSE project vision is to enable National Labs and Universities to request and provision end-to-end intelligent network services for their application workflows, leveraging SDN capabilities. Our approach is to design network
abstractions and an operating framework to allow host, Science
DMZ / LAN, and WAN auto-configuration across domains, based on
infrastructure policy constraints designed to meet end-to-end
service requirements.
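A purely hypothetical sketch of what such an end-to-end service request could look like from the application side is shown below; the resource names and JSON fields are invented for illustration and do not reflect the SENSE project's actual interfaces.

```python
import json
from dataclasses import dataclass, asdict, field

# Hypothetical shape of an end-to-end network service request; all field
# names are invented for illustration, not the SENSE project's schema.
@dataclass
class ServiceIntent:
    endpoints: list
    bandwidth_gbps: int
    start: str
    end: str
    constraints: dict = field(default_factory=dict)

intent = ServiceIntent(
    endpoints=["urn:site-a:dtn01", "urn:site-b:dtn07"],   # placeholder resource names
    bandwidth_gbps=50,
    start="2017-11-13T09:00Z",
    end="2017-11-13T17:00Z",
    constraints={"max_latency_ms": 80},
)
print(json.dumps(asdict(intent), indent=2))  # what an orchestrator might receive
```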
Demonstrations of 400 Gbps Disk-to-Disk WAN File Transfers Using iWARP and NVMe Disk
Location: Booth 1653 (Center for Data Intensive Science
– Open Commons Consortium)
NASA requires the processing and exchange of ever increasing
vast amounts of scientific data, so NASA networks must scale
up to ever increasing speeds, with 100 Gigabit per second
(Gbps) networks being the current challenge. However, it is not sufficient simply to have 100 Gbps network pipes, since normal data transfer rates would not even fill a 1 Gbps pipe. The NASA Goddard High End Computer Networking (HECN) team will demonstrate systems and techniques to achieve near-400G line-rate disk-to-disk data transfers between a high performance NVMe server at SC17 and a pair of high performance NVMe servers across two national wide-area 2x100G network paths, utilizing iWARP to transfer the data between the servers' NVMe drives.
Dynamic Distributed Data Processing
Location: Booth 1653 (Center for Data Intensive Science
– Open Commons Consortium)
This demonstration will show dynamic arrangement of high
performance, widely distributed processing of large volumes of
data across a set of compute and network resources organized
in response to changing application demands. A dynamic,
distributed processing pipeline will be demonstrated from SC17
to the Naval Research Laboratory in Washington, DC, and back
to SC17. A software-controlled network will be assembled using
a number of switches and three SCinet 100 Gbps connections from DC to Denver. We will show dynamic deployment of complex, production-quality (uncompressed), live 4K video processing workflows, with rapid redeployment to serve different needs and leverage available nationally distributed resources, which is relevant to emerging defense and intelligence distributed data processing challenges. Our remote I/O strategy, including a 100G RDMA extension of the GStreamer framework, allows data processing as the data stream arrives rather than waiting for bulk transfers.
Collaborating organizations: Naval Research Laboratory,
University of Missouri, International Center for Advanced
Internet Research, Northwestern University (iCAIR), Open
Commons Consortium (OCC), Laboratory for Advanced Computing, University of Chicago (LAC), Defense Research and Engineering Network (DREN), Energy Sciences Network (ESnet), Mid-Atlantic
Crossroads (MAX), StarLight International/National
Communications Exchange Facility Consortium, Metropolitan
Research and Education Network (MREN), SCinet, and the Large
Scale Networking Coordinating Group of the National
Information Technology Research and Development (NITRD)
program (Big Data Testbed).
Calibers: A Bandwidth Calendaring Paradigm for Science
Workflows
Location: NOC 1081 (SCinet)
Science workflows require large data transfer between
distributed instrument facilities, storage, and computing
resources. To ensure that these resources are maximally
utilized, R&E networks interconnecting these resources
must ensure there is no bottleneck. However, running the network at high utilization causes congestion that results in poor end-to-end TCP throughput and/or fairness. This in turn leads to unpredictability in transfer times and thus to poor utilization of distributed resources.
Calibers aims to advance the state of the art in traffic engineering by leveraging an SDN-based network architecture and flow-pacing algorithms to provide predictable data transfer performance and higher network utilization. Calibers highlights how, by intelligently and dynamically shaping flows, we can maximize the number of flows that meet their deadlines while improving network resource utilization.
Calibers is able to (1) demonstrate workflow-intent and network-controller interaction to continually assess per-flow bandwidth allocation, and (2) implement an optimization algorithm that actively manages the calendaring of flow deadlines. Together, these enable dynamic rate shaping at the edge, allowing better network utilization.
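A toy version of deadline-aware pacing at a single bottleneck can be written in a few lines: give the most urgent flows the minimum rate that just meets their deadline, within the link's capacity. The sketch below only illustrates the idea of calendaring and pacing; it is not Calibers' actual optimization algorithm, and the example flows are made up.

```python
def pace_flows(flows, capacity_gbps):
    """Assign per-flow pacing rates, earliest deadline first.

    flows: list of dicts with 'name', 'bytes_left', 'seconds_to_deadline'.
    Returns {name: rate_gbps}; flows that cannot fit will miss their deadline
    and would need to be re-calendared."""
    rates = {}
    remaining = capacity_gbps
    for f in sorted(flows, key=lambda f: f["seconds_to_deadline"]):
        need_gbps = f["bytes_left"] * 8 / 1e9 / max(f["seconds_to_deadline"], 1e-6)
        rate = min(need_gbps, remaining)
        rates[f["name"]] = rate
        remaining -= rate
    return rates

demo = [
    {"name": "climate", "bytes_left": 5e12, "seconds_to_deadline": 600},
    {"name": "genomics", "bytes_left": 2e12, "seconds_to_deadline": 1800},
]
print(pace_flows(demo, capacity_gbps=100))
```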
Dynamic CEPH Provisioning for Authentication Federations
Location: Booth 471 (Michigan State University / University of Michigan)
Scientific collaboration on large datasets can be a
challenging problem which diverts time from core research
goals. A major motivation for the OSiRIS project is reducing
the barriers for access to and sharing of research data
storage, and a core part of achieving that goal is leveraging
authentication federations that many institutions already
participate in. We will demonstrate that it is possible to
link these federated credentials to the Ceph storage platform
and make the enrollment and provisioning process for new users
relatively easy – so easy that anyone walking by our
booth can get started, move data into our system, and work
directly with that data from their own systems.
ATLAS Machine Learning on the Pacific Research Platform
Location: Booth 663 (California Institute of Technology /
CACR)
The basic theme of the demo is to show the ability to perform
ATLAS machine-learning-based simulation and analysis on a distributed platform composed of shared GPU machines
connected by high speed networks that allow transparent
delivery of data through a distributed Ceph file system. The
Pacific Research Platform provides this infrastructure. The
lessons learned will inform the development of models for the
next generation of ATLAS tools targeted for the High
Luminosity Upgrade of the LHC.
HEPCloud Distributed Caching Demo
Location: Booth 663 (California Institute of Technology /
CACR)
LHC production jobs targeted in this exercise consist of a sequence of processes, with all but one of them either doing no reads or reading from the worker node. Only one process reads data both locally and remotely, using XrootD. The remote read has the characteristic that the data is not read fully sequentially in one pass, but with random skips, and potentially many times over by all concurrent jobs. The baseline setup we consider is when the remotely accessed dataset resides on the storage of one of the remote computing centers and data is brought on demand next to the compute.
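The access pattern described (many small reads at scattered offsets, possibly repeated by concurrent jobs) can be mimicked with the XRootD Python bindings as below. The URL, file size, and read sizes are placeholders, and this is only a load-pattern sketch, not the CMS production workflow.

```python
import random
from XRootD import client  # pyxrootd bindings assumed to be installed

# Placeholder URL: a remote origin or cache endpoint.
url = "root://xcache.example.org//store/data/sample.root"

f = client.File()
status, _ = f.open(url)
assert status.ok, status.message

# Random-skip reads: small ranges at scattered offsets, the pattern a
# distributed cache next to the compute is meant to absorb.
file_size = 10 * 1024**3          # pretend 10 GiB (placeholder)
for _ in range(100):
    offset = random.randrange(0, file_size, 64 * 1024)
    status, data = f.read(offset=offset, size=256 * 1024)

f.close()
```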
Improved Monitoring and Performance in the Network Data
Plane for the LHC Grid
Location: Booth 663 (California Institute of Technology /
CACR)
In complex networks, such as the USCMS Tier1 facility at
Fermilab and the Caltech Tier2 cluster, which support
large-scale collaborative science projects, such as the Large
Hadron Collider’s (LHC) Compact Muon Solenoid (CMS)
experiment, detecting connectivity and performance problems is
a very challenging task.
One primary objective of this investigation is to design techniques and develop monitoring systems at the application level and at the switch level, building the basic infrastructure to detect, troubleshoot, and resolve pairwise connectivity and performance problems. The other main objective is to improve network performance between LHC sites.
High Throughput Flows Between North and South Hemispheres
Using Kytos, an Innovative SDN Platform
Location: Booth 1491 (Sao Paulo State University)
Data-intensive, globally distributed science programs still face problems related to the need to handle huge dataset transfers between endpoints widely distributed across the globe. Orchestrating network resources to use the network infrastructure efficiently, as well as diagnosing network problems, are still big challenges that need to be addressed.
Most administrators are still stuck with traditional network
tools for daily administrative and debugging tasks and
tethered to proprietary vendor solutions. SDN is a promising
technology that aims to open interfaces of proprietary
networking devices to improve orchestration and to enable
innovation. In this demo we will exercise and stress multiple
100 Gbps WAN links between the United States and South America
using Kytos, a new, innovative open source SDN platform. This
demo will also exercise the OpenWave project, an experimental
100G alien wave that comprises a total span of approximately 10,000 km.
Disk-to-disk Data Transfer at 100G
Location: Booth 1653 (Center for Data Intensive Science
– Open Commons Consortium)
Compute Canada and regional partners are deploying and
managing advanced research computing (ARC) resources
consisting of multi-petabyte storage systems and well over
100,000 compute cores. These resources, located across four
sites in Canada, are used by Canadian researchers from any
region in the country.
Multi-purpose GP-GPU Cluster for Machine Learning Fast
Prototyping, Medium Scale Training and Knowledge
Dissemination
Location: Booth 663 (California Institute of Technology /
CACR)
Over the recent decades of machine learning's evolution, we have witnessed the advent of deep learning for tackling many computer vision and other pattern recognition tasks. This revival has been due to three factors: the discovery of several experimental methods for training large models, the availability of large labelled datasets, and the fast development of dedicated computing hardware. Deep learning is now "everywhere", in many everyday applications. There is a general trend toward adoption of modern machine learning and deep learning methods within the field of High Energy Physics, and some of these methods are possible avenues for tackling the data-hyper-intensive computation that will be required in the coming decades. While there is a large amount of computing resources available to and used by the community, these resources are not necessarily usable for fast prototyping and quick turn-around during exploration of new techniques. We propose a multi-GPU cluster architecture, and a specific multi-purpose usage model, to improve the return on investment and accelerate the science.
PRP Multi-Institution Hyper-Converged ScienceDMZ
Location: Booth 525 (SDSC)
The goal is to bring GIFEE ("Google's infrastructure for everyone else") to Science DMZs and HPC. The science running on computers is becoming more and more collaborative and is driving new requirements for federation and orchestration of HPC assets. Kubernetes has emerged as the container orchestration engine of choice for many cloud providers; Google, Amazon, Microsoft, and many others already support Kubernetes. This "pattern" has reached its tipping point out in the wild, but not yet in HPC and Science DMZs. Our testbed on the NSF-funded Pacific Research Platform is well positioned to support experiments and research that stress the limits of distance and performance while maintaining the security and domain isolation needed to enable multi-institution collaboration.
CMS.VR
Location: Booth 663 (California Institute of Technology /
CACR)
CMS.VR provides a virtual reality based interactive experience
of high-energy proton-proton collisions in the CMS detector at
the CERN Large Hadron Collider. Using the HTC Vive VR headset
and hand controllers, CMS.VR puts the user inside the CMS
collision hall at Point 5 of the LHC, with the full 3D
geometry of the CMS detector. The user can display actual CMS
collision events with reconstructed particle tracks,
electrons, and muons rendered as tubes, and energy deposits in
the hadron and electromagnetic calorimeters rendered as
rectangular prisms. The user can move around inside the
collision and interrogate the reconstructed physics objects to
extract their detailed properties. An LSTM network that
performs event classification is visualized as well.
MMCFTP’s Data Transfer Experiment Using Three 100 Gbps Lines Between Japan and the USA
Location: Booth 973 (NICT)
Massively Multi-Connection File Transfer Protocol (MMCFTP) [1]
is a new file transfer protocol designed for big data sharing
of advanced research projects and data intensive science.
MMCFTP achieved a 150 Gbps data transfer speed between Tokyo and Salt Lake City at SC16, using two 100 Gbps lines between Japan and the USA. A new 100 Gbps route via Singapore will be available in October of this year. At SC17, we will attempt MMCFTP data transfers using three 100 Gbps lines between Japan and the USA. The target speed is 210 Gbps, up to 240 Gbps. This would set a new record for intercontinental-class long-distance network data transfer using a single host pair.
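The core idea of a massively multi-connection transfer (splitting one file into byte ranges, each carried over its own TCP connection) can be sketched as follows. This is a simplified illustration, not MMCFTP itself; the host, port, and 8-byte header framing are invented for the example, and a real receiver would have to reassemble the ranges.

```python
import os
import socket
import threading

def send_range(host, port, path, offset, length):
    """Send one byte range of the file over its own TCP connection.
    A tiny invented header (offset, length) tells the receiver where it goes."""
    with open(path, "rb") as f, socket.create_connection((host, port)) as s:
        s.sendall(offset.to_bytes(8, "big") + length.to_bytes(8, "big"))
        f.seek(offset)
        remaining = length
        while remaining:
            chunk = f.read(min(4 * 1024 * 1024, remaining))
            s.sendall(chunk)
            remaining -= len(chunk)

def multi_connection_send(host, port, path, n_conn=64):
    """Split the file into n_conn ranges and send them in parallel."""
    size = os.path.getsize(path)
    piece = (size + n_conn - 1) // n_conn
    threads = []
    for i in range(n_conn):
        offset = i * piece
        if offset >= size:
            break
        threads.append(threading.Thread(
            target=send_range,
            args=(host, port, path, offset, min(piece, size - offset))))
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```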
Global Virtualization Services (GVS) for Distributed
Applications and Advanced Network Services
Location: Booth 267 (GEANT)
The Global Virtualization Services (GVS) allow networked
applications to dynamically acquire cyber-infrastructure
objects such as computational platforms, switching and
forwarding elements, storage assets, and/or other custom
components or instruments, along with transport circuits
interconnecting these components to create customized high
performance virtual networks distributed across a global
footprint. This demonstration will show high performance
virtualized networked environments being set up dynamically
across facilities maintained in North America by Ciena
Research, and facilities maintained in Europe by NORDUnet and
the GEANT Network. The demonstration will show how such
virtualized services can be established easily and rapidly,
and can be flexibly re-configured as the distributed
application requirements change, and how these virtual
environments perform at real hardware levels despite their
“virtual” service model.
The Global Virtualization Services
Location: Booth 267 (GEANT)
The Global Virtualization Services (GVS) are based upon a Generic Virtualization Model (GVM)
developed in Europe under the GEANT Project and European
Commission funding. The GVM is an open architecture that
defines a standard lifecycle for virtual objects, their
attributes, and a means of linking virtual objects to one
another to create sophisticated high performance and highly
dynamic networked environments. This GVS concept allows new or
experimental services and applications to be deployed easily
on a global scale in insulated and isolated virtual networks, allowing these emerging services to co-exist safely with other applications and services and to evolve, at scale and in place, into new, mature production services.
Corsa Managed Filtering Capability
Location: NOC 1081 (SCinet)
This SCinet NRE demonstration showcases a managed packet
filtering capability that enables network security without
compromising network performance. An evolution in security
architecture is achieved through the separation and
simplification of security functions found in today’s
traditional, all-inclusive network security solutions. This
function-based architecture will be used to demonstrate 10G
and 100G in-line filtering with constant packet rate
forwarding performance at 150 Mpps / 100G. In addition to
filtering and rate-limiting the platform performs protocol
validation, offers traffic statistics for every rule, and
enables other security functions. This demonstration is
accomplished by deploying Corsa NSE7000 devices into the main
distribution framework of the SCinet architecture.
Corsa Network Hardware Virtualization
Location: NOC 1081 (SCinet)
This SCinet NRE demonstration will provide network researchers with programmatic control over their own isolated, OpenFlow-based virtual forwarding contexts (VFCs). In parallel, this demonstration will provide network operators with the tools and confidence needed to allow network researchers this level of programmatic forwarding control. This will be accomplished using a SCinet infrastructure built on the Corsa DP2000 product family and its virtualization features. In this demonstration the network operator will be the SCinet Network Operations Center (NOC), which will configure and monitor these VFCs at the request of the network researchers. We will demonstrate and enforce the boundaries between the SCinet NOC administrative domain and the network researchers' (i.e., users') administrative domains through the Corsa DP2000 virtualization features.
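As a flavor of the programmatic forwarding control a researcher could exercise inside an isolated VFC, the sketch below is a minimal Ryu (OpenFlow 1.3) application that installs one flow rule when a switch connects. It is illustrative only; the match fields and port numbers are placeholders, and it assumes the VFC speaks standard OpenFlow to the researcher's own controller.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class ResearcherVfcApp(app_manager.RyuApp):
    """Install one forwarding rule in the researcher's virtual forwarding context."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Steer TCP port 5201 traffic (e.g. iperf3 test flows) out port 2;
        # the port numbers are placeholders for the demo topology.
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, tcp_dst=5201)
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```

Such an app would be started with ryu-manager and pointed at the controller endpoint exposed for the researcher's VFC.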
PerfSONAR and SDNTrace Hop-by-Hop Network Troubleshooting
for Flows of Interest
Location: Booth 1635 (University of Utah)
End-to-end network troubleshooting requires visibility on a hop-by-hop basis, regardless of the layer 3 and above protocol stack. End-to-end troubleshooting should also be able to look at "flows of interest" along "virtual paths". This demo is the start of an exploration of using SDNTrace, perfSONAR, and other tools to examine these "virtual paths" hop by hop across the network. The exploration will validate the "virtual path" by starting a client and dynamically placing the tools in the "virtual path".
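In the spirit of per-flow, hop-by-hop tracing, the sketch below uses scapy to send TCP probes for one specific 5-tuple with increasing TTLs and collects the responding hop addresses. It is not SDNTrace or perfSONAR, just a minimal illustration of tracing a "flow of interest"; the destination address and ports are placeholders.

```python
from scapy.all import IP, TCP, sr1  # scapy assumed installed; needs raw-socket privileges

def trace_flow(dst, dport=5201, sport=40000, max_hops=20):
    """Trace the path taken by one specific 5-tuple, hop by hop."""
    hops = []
    for ttl in range(1, max_hops + 1):
        probe = IP(dst=dst, ttl=ttl) / TCP(sport=sport, dport=dport, flags="S")
        reply = sr1(probe, timeout=2, verbose=0)
        hops.append((ttl, reply.src if reply else "*"))
        if reply is not None and reply.haslayer(TCP):
            break  # a TCP answer means we reached the destination for this flow
    return hops

print(trace_flow("198.51.100.10"))  # documentation-range address as a placeholder
```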
The SC conference series has traditionally been home to cutting-edge developments in high-performance networking, alongside those in high-performance computing, storage, and analytics. SCinet solicited proposals from research and industry participants to display new or innovative demonstrations of network testbeds, emerging network hardware, protocols, and advanced network-intensive scientific applications.
DEMO TOPICS
Topics for this year’s Network Research Exhibition
demos and experiments include (but are not limited to):
- Software-defined networking
- Novel network architecture
- Switching and routing
- Alternative data transfer protocols
- Network monitoring and management, and network control
- Network Security/encryption and resilience
- Open clouds and storage area networks
- HPC-related usage of GENI Racks
A selection of NRE demonstrations will be invited to participate in a panel at a half-day SC17 workshop titled "Innovating the Network for Data Intensive Science," taking place on Sunday, November 12, 2017.