
Advanced Digital Network Technologies and Middleware Services

Please note that a Letter of Intent is due Tuesday, September 08, 2015, 5:00 PM ET.

Program Area Overview: Office of Advanced Scientific Computing Research

The primary mission of the Advanced Scientific Computing Research (ASCR) program is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. A particular challenge of this program is fulfilling the science potential of emerging computing systems and other novel computing architectures, which will require numerous significant modifications to today's tools and techniques to deliver on the promise of exascale science.

To accomplish this mission, ASCR funds research at public and private institutions and at DOE laboratories to foster and support fundamental research in applied mathematics, computer science, and high-performance networks. In addition, ASCR supports multidisciplinary science activities under a computational science partnership program involving technical programs within the Office of Science and throughout the Department of Energy. ASCR also operates high-performance computing (HPC) centers and related facilities, and maintains a high-speed network infrastructure (ESnet) at Lawrence Berkeley National Laboratory (LBNL) to support computational science research activities. The HPC facilities include the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory (ORNL), the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory (ANL), and the National Energy Research Scientific Computing Center (NERSC) at LBNL.

ASCR supports research on applied computational sciences in the following areas:

– Applied and Computational Mathematics: to develop the mathematical algorithms, tools, and libraries to model complex physical and biological systems.
– High-Performance Computing Science: to develop scalable systems software and programming models, and to enable computational scientists to effectively utilize petascale computers to advance science in areas important to the DOE mission.
– Distributed Network Environment: to develop integrated software tools and advanced network services to enable large-scale scientific collaboration and make effective use of distributed computing and science facilities in support of the DOE science mission.
– Applied Computational Sciences Partnership: to achieve breakthroughs in scientific advances via computer simulation technologies that are impossible without interdisciplinary effort.

For additional information regarding the Office of Advanced Scientific Computing Research priorities, refer to the ASCR program website.

TOPIC 1: Advanced Digital Network Technologies and Middleware Services

Maximum Phase I Award Amount: $150,000
Maximum Phase II Award Amount: $1,000,000
Accepting SBIR Phase I Applications: YES
Accepting SBIR Fast-Track Applications: YES
Accepting STTR Phase I Applications: YES
Accepting STTR Fast-Track Applications: YES

Advanced digital network technologies and middleware services play a significant role in the way DOE scientists communicate with peers and collect and process data. Optical networks operating at rates of more than 100 Gbps support the transfer of petabytes of data per day.
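As a rough check of the figure above, the short Python sketch below estimates how much data a 100 Gbps path can move in a day at different sustained utilizations. The line rate and utilization values are illustrative assumptions only, not requirements of this topic.

```python
# Back-of-envelope estimate: data volume moved per day over a 100 Gbps link.
# The line rate and utilization figures below are illustrative assumptions.

LINE_RATE_GBPS = 100        # nominal optical line rate, gigabits per second
SECONDS_PER_DAY = 86_400

def petabytes_per_day(line_rate_gbps: float, utilization: float) -> float:
    """Data moved in one day, in petabytes (1 PB = 10**15 bytes)."""
    bits_per_day = line_rate_gbps * 1e9 * utilization * SECONDS_PER_DAY
    return bits_per_day / 8 / 1e15

for util in (1.0, 0.8, 0.5):
    print(f"{LINE_RATE_GBPS} Gbps at {util:.0%} utilization ≈ "
          f"{petabytes_per_day(LINE_RATE_GBPS, util):.2f} PB/day")
```

At full utilization a single 100 Gbps path moves roughly 1 PB per day, so multi-petabyte daily transfers imply multiple links or higher line rates.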
These networks also peer with commercial networks, allowing scientists remote access to instruments and facilities while also allowing citizens access to the data and knowledge that have been produced. Improvements in the tools and services used to manage and operate this infrastructure are needed to meet the needs of both network operators and users.

Scientific instruments and supercomputer facilities generate, consume, process, and store both raw and analyzed data, enabling the discovery of new knowledge. Efforts are underway to scale these computers to support extreme-scale, computationally intensive science applications and to deal with increasing volumes and velocities of experimental and observational data. Optical components play a role at all levels in these systems, ranging from chip-to-chip communications all the way up to wide area networks. Accelerating the development of optical components to meet the data movement needs of unique scientific instruments and computing facilities is a major challenge that this topic addresses. This topic also addresses the higher-level middleware services and tools that are needed to turn raw data into actionable knowledge.

This topic solicits proposals that address issues related to developing tools and services that report performance problems in a manner suitable for network engineers or application users, developing optical components that can be used to build digital networks or computer interconnect fabrics, or hardening middleware tools and services that deal with Big Data.

a. perfSONAR-Based Network Monitoring Tools and Services

perfSONAR (https://www.perfsonar.net) is a network monitoring architecture developed by the international Research and Education Network (REN) community for developing multi-domain measurement and monitoring services. Using this architecture, engineers can deploy tools and services that collect and store unique data values in managed data archives. Other tools and services can then leverage this archived data to analyze and display it in a manner that makes sense to the network operator or end user. As of May 2015, there are over 1,400 deployed perfSONAR measurement boxes in the worldwide REN community.

Grant applications are sought to improve the scalability, usability, and deployability of perfSONAR-based tools and services. Outstanding issues include, but are not limited to:

1) Scaling: The perfSONAR community expects to scale from thousands to hundreds of thousands of deployed measurement boxes. Tools and services are required that can manage dozens to hundreds of measurement boxes in a cost-effective manner.
2) Hardware: Low-cost hardware solutions that can operate over regional and transcontinental distances at 10 Gbps and beyond are needed.
3) Unique Data Collection: Software tools are needed that collect and store unique data measurements under the supervision of the perfSONAR management service layer.
4) Data Analysis Tools: The collected data needs to be analyzed to identify both trend information and performance anomalies (a minimal analysis sketch follows this subtopic).
5) Data Visualization Services: Network operators and network users need intuitive displays that show performance or operational information tailored to meet individuals' needs.

Questions – Contact: Richard Carlson, richard.carlson@science.doe.gov
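To illustrate the kind of analysis tooling described in item 4 above, the sketch below pulls a day of throughput measurements from a measurement archive and flags results that fall well below the recent trend. The archive URL, query parameters, and JSON layout are illustrative assumptions for this sketch, not the documented perfSONAR archive API; a production tool would use the perfSONAR client interfaces and more robust anomaly detection.

```python
# Minimal sketch: flag throughput anomalies in an archived measurement series.
# ASSUMPTION: the archive returns JSON like [{"ts": 1431648000, "val": 9.4e9}, ...]
# (timestamp, throughput in bits/s). The URL and field names are hypothetical.

import json
import statistics
import urllib.request

ARCHIVE_URL = "https://ps-archive.example.org/archive/throughput?time-range=86400"  # hypothetical

def fetch_series(url: str) -> list[dict]:
    """Download one day of throughput measurements as a list of {ts, val} records."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def flag_anomalies(series: list[dict], threshold: float = 2.0) -> list[dict]:
    """Return measurements more than `threshold` standard deviations below the mean."""
    values = [point["val"] for point in series]
    mean, stdev = statistics.fmean(values), statistics.pstdev(values)
    if stdev == 0:
        return []
    return [p for p in series if (mean - p["val"]) / stdev > threshold]

if __name__ == "__main__":
    measurements = fetch_series(ARCHIVE_URL)
    for point in flag_anomalies(measurements):
        print(f"possible throughput drop at ts={point['ts']}: {point['val'] / 1e9:.1f} Gbps")
```

A deployable service would add authentication, per-path baselines, and alert routing; the point here is only the query-the-archive-then-analyze pattern this subtopic calls for.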
b. Optical/Photonic Computing and Networking Components

Information processing requirements and capabilities of high-performance computing (HPC) and network systems have grown dramatically, and the need for increased data transmission rates and bandwidth, coupled with lower cost and energy consumption, has become a limiting factor in the performance of such systems. Readily available commercial photonic, silicon photonic, and quantum computing componentry and tools could provide effective and scalable solutions to current and future HPC and network communication challenges. Optical, photonic, and optoelectronic components and technologies have revolutionized all areas of digital communications. Increasing overall system parallelism and concurrency, together with the need for energy efficiency, have emerged as some of the key challenges. Optical and photonic devices and componentry offer the potential for creating system-wide interconnection and computing networks and systems with extremely high bandwidth and energy efficiency. Therefore, commercially available optical, photonic, or optoelectronic components and tools could provide an effective and scalable solution to building and operating future extreme-scale computers.

Grant applications are sought to develop and commercialize tools, devices, services, fabrication capabilities, and turnkey solutions that will accelerate rapid adoption of these technologies and address the emerging need for massive deployment of optical and photonic network and computing infrastructures. Issues include, but are not limited to: tools that decrease the cost of terminating or splicing optical cables; ultrafast solid-state switching fabrics; photonic devices, optoelectronic devices, and interconnects; reconfigurable optical/photonic converters or encoders; components to test optical signal quality; computing components; or components that operate at 100+ gigabit-per-second line rates.

Questions – Contact: Robinson Pino, robinson.pino@science.doe.gov

c. BigData Technologies

This subtopic focuses on complex data management technologies that go beyond traditional relational database management systems. Efficient and cost-effective technologies to collect, manage, and analyze distributed BigData are a challenge to many organizations, including the scientific community. Database management technologies based on traditional relational and hierarchical database systems are proving inadequate to deal with BigData complexities (volume, variety, veracity, and velocity), especially when applied to BigData systems in science and engineering.

While the primary focus is on the development of tools and services to support complex scientific and engineering data, all sources of complex data are in scope for this subtopic. The focus of this subtopic is on the development of cost- and time-effective, commercial-grade technologies in the following categories:

– BigData management software-enabling technologies:
this includes, but is not limited to, the development of software tools, algorithms, and turnkey solutions for complex data management, such as NoSQL/graph databases to deal with unstructured data in new ways; visualization and data processing tools for unstructured multi-dimensional data; robust tools to test, validate, and remove defects in large unstructured data sets; tools to manage and analyze hybrid structured and unstructured data; BigData security and privacy solutions; BigData-as-a-service systems; high-speed hardware/software data encryption and reduction systems; and online management and analysis of streaming and text data from instruments or embedded systems (a minimal streaming-analysis sketch follows subtopic d below).

– BigData network-aware middleware technologies: this includes high-speed network and middleware technologies that enable the collection, archiving, and movement of massive amounts of data within datacenters, data cloud systems, and over wide area networks (WANs). This may include, but is not limited to, hardware subsystems such as high-performance data servers and data transfer nodes; high-speed storage area network (SAN) technologies; network-optimized data cloud services such as virtual storage technologies; and other distributed BigData solutions.

Grant applications must ensure that proposed work goes beyond traditional data management system technologies by focusing on one or more defining characteristics of BigData (volume, velocity, veracity, and variety).

Questions – Contact: Thomas Ndousse, thomas.ndousse_fetter@science.doe.gov

d. Other

In addition to the specific subtopics listed above, the Department invites grant applications in other areas that fall within the scope of the topic description above.

Questions – Contact: Richard Carlson, richard.carlson@science.doe.gov
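As an illustration of the online streaming analysis referenced in subtopic c, the sketch below keeps a running mean and variance over a stream of instrument readings and flags values far from the recent baseline without storing the full data set. The record source and the 4-sigma threshold are illustrative assumptions; a commercial-grade tool would also address distribution, persistence, and the remaining BigData characteristics (volume, variety, veracity).

```python
# Minimal sketch: online screening of a data stream using Welford's algorithm
# for a running mean/variance, so only summary state is kept in memory.
# The synthetic stream and 4-sigma threshold below are illustrative assumptions.

import math
import random
from typing import Iterable, Iterator

class RunningStats:
    """Incrementally tracks count, mean, and variance of a numeric stream."""
    def __init__(self) -> None:
        self.count = 0
        self.mean = 0.0
        self._m2 = 0.0   # sum of squared deviations from the running mean

    def update(self, x: float) -> None:
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self._m2 += delta * (x - self.mean)

    @property
    def stdev(self) -> float:
        return math.sqrt(self._m2 / self.count) if self.count > 1 else 0.0

def screen(stream: Iterable[float], sigmas: float = 4.0) -> Iterator[float]:
    """Yield readings that deviate from the running baseline by > `sigmas` stdevs."""
    stats = RunningStats()
    for x in stream:
        # wait for ~30 readings to establish a baseline before flagging anything
        if stats.count > 30 and stats.stdev > 0 and abs(x - stats.mean) > sigmas * stats.stdev:
            yield x          # flag before folding the outlier into the baseline
        stats.update(x)

if __name__ == "__main__":
    # synthetic instrument stream: mostly Gaussian noise, with occasional spikes
    readings = (random.gauss(100.0, 5.0) if i % 500 else 400.0 for i in range(5_000))
    for outlier in screen(readings):
        print(f"flagged reading: {outlier:.1f}")
```

The same update-then-test pattern extends to per-channel baselines, or to sketch and sampling structures when volume and velocity make storing raw readings impractical.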
