Nuclear Physics Software and Data Management

Please note that a Letter of Intent is due Tuesday, September 8, 2015, 5:00pm ET.

Program Area Overview: Office of Nuclear Physics

Nuclear physics (NP) research seeks to understand the structure and interactions of atomic nuclei and the fundamental forces and particles of nature as manifested in nuclear matter. Nuclear processes are responsible for the nature and abundance of all matter, which in turn determines the essential physical characteristics of the universe. The primary mission of the Nuclear Physics (NP) program is to develop and support the scientists, techniques, and facilities that are needed for basic nuclear physics research and for isotope development and production. Attendant upon this core mission are responsibilities to enlarge and diversify the Nation’s pool of technically trained talent and to facilitate the transfer of technology and knowledge to support the Nation’s economic base.

Nuclear physics research is carried out at national laboratories, accelerator facilities, and universities. The Continuous Electron Beam Accelerator Facility (CEBAF) at the Thomas Jefferson National Accelerator Facility (TJNAF) allows detailed studies of how quarks and gluons bind together to make protons and neutrons; in an upgrade currently underway, the CEBAF electron beam energy will be doubled from 6 to 12 GeV. The Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) is forming new states of matter that have not existed since the first moments after the birth of the Universe; a beam luminosity upgrade is currently underway. NP is supporting the development of the Facility for Rare Isotope Beams (FRIB), currently under construction at Michigan State University. The NP community is also exploring opportunities with a proposed electron-ion collider.

The NP program also supports research and facility operations directed toward understanding the properties of nuclei at their limits of stability, and the fundamental properties of nucleons and neutrinos. This research is made possible by the Argonne Tandem Linac Accelerator System (ATLAS) at Argonne National Laboratory (ANL), which provides stable and radioactive beams in a variety of species and energies; a local program of basic and applied research at the 88-Inch Cyclotron of Lawrence Berkeley National Laboratory (LBNL); the operation of accelerators for in-house research programs at two universities (Texas A&M University and the Triangle Universities Nuclear Laboratory (TUNL) at Duke University), which provide unique instrumentation with a special emphasis on the training of students; and non-accelerator experiments, such as large standalone detectors and observatories for rare events.

Of interest is R&D related to future experiments in fundamental symmetries, such as neutrinoless double-beta decay experiments and measurement of the electric dipole moment of the neutron, where extremely low background and low count rate particle detection are essential. Another area of R&D is rare isotope beam capabilities, which could lead to a set of accelerator technologies and instrumentation developments targeted at exploring the limits of nuclear existence. By producing and studying highly unstable nuclei that are now formed only in stars, scientists could better understand stellar evolution and the origin of the elements.
Our ability to continue making a scientific impact on the general community relies heavily on the availability of cutting-edge technology and advances in detector instrumentation, electronics, software, accelerator design, and isotope production. The technical topics that follow describe research and development opportunities in the equipment, techniques, and facilities needed to conduct and advance nuclear physics research at existing and future facilities. For additional information regarding the Office of Nuclear Physics priorities, see the Office of Nuclear Physics website.

TOPIC 21: Nuclear Physics Software and Data Management

Maximum Phase I Award Amount: $150,000
Maximum Phase II Award Amount: $1,000,000
Accepting SBIR Phase I Applications: YES
Accepting SBIR Fast-Track Applications: YES
Accepting STTR Phase I Applications: YES
Accepting STTR Fast-Track Applications: YES

Large scale data storage and processing systems are needed to store, access, retrieve, distribute, and process data from experiments conducted at large facilities, such as Brookhaven National Laboratory's Relativistic Heavy Ion Collider (RHIC) and the Thomas Jefferson National Accelerator Facility (TJNAF). In addition, data acquisition for the Facility for Rare Isotope Beams (FRIB) will require unprecedented speed and flexibility in collecting data from the new flash-ADC-based detectors. Experiments at such facilities are extremely complex, involving thousands of detector elements that produce raw experimental data at rates in excess of 1 GB/sec, resulting in the annual production of data sets on the hundred-terabyte (100 TB) to petabyte (PB) scale. Data sets of many tens to hundreds of TB are currently distributed to institutions worldwide for analysis, and with the increasing data generation rates seen at RHIC, PB-scale data sets are anticipated. Research on the management of such large data sets, and on high speed, distributed data acquisition, will be required to support these large scale nuclear physics experiments.

All grant applications should explicitly show relevance to the DOE nuclear physics program. In addition, applications should be informed by prior practice in nuclear physics applications, commercially available products, and emerging technologies. We note that a proposal that advocates incremental improvements or moderate levels of innovation may nonetheless have an enormous impact in the right context. Such proposals must of course make their case convincingly, as they may otherwise be considered non-responsive.

Grant applications are sought only in the following subtopics:

a. Large Scale Data Processing and Distribution

A trend in nuclear physics is to maximize the availability of distributed storage and computing resources by constructing end-to-end data handling and distribution systems, using web services or data grid infrastructure software (such as Globus, Condor, or xrootd), with the aim of achieving fast data processing and/or increased data availability across many compute facilities.
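As a rough illustration of one building block of such an end-to-end data handling system, the sketch below performs metadata-driven replica selection with checksum verification. It is a minimal sketch only: the catalog contents, site names, URLs, and the pick_replica/verify helpers are hypothetical stand-ins for the replica catalogs and grid transfer tooling (e.g., Globus or xrootd) that a production system would use.

```python
"""Minimal sketch of metadata-driven replica selection and checksum
verification; all catalog entries below are hypothetical examples."""

import zlib
from dataclasses import dataclass


@dataclass
class Replica:
    site: str        # hosting site, e.g. a Tier 2 center
    url: str         # transfer endpoint for this copy of the data set
    rtt_ms: float    # measured round-trip time to the site
    adler32: int     # checksum recorded when the copy was written


# Hypothetical catalog: data set name -> known replicas.
CATALOG = {
    "run2015/AuAu200/calib.dat": [
        Replica("BNL_T1", "root://bnl.example/calib.dat", 12.0, 0x1A2B3C4D),
        Replica("MIT_T2", "root://mit.example/calib.dat", 31.5, 0x1A2B3C4D),
    ],
}


def pick_replica(dataset: str) -> Replica:
    """Choose the lowest-latency replica; a real broker would also weigh
    site load, queue depth, and past transfer performance."""
    return min(CATALOG[dataset], key=lambda r: r.rtt_ms)


def verify(payload: bytes, replica: Replica) -> bool:
    """Check a retrieved file against the catalog checksum (Adler-32)."""
    return zlib.adler32(payload) == replica.adler32


if __name__ == "__main__":
    best = pick_replica("run2015/AuAu200/calib.dat")
    print(f"fetching from {best.site}: {best.url}")
```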
Grant applications are sought for: (1) hardware and/or software techniques to improve the effectiveness and reduce the costs of storing, retrieving, and moving such large volumes of data, including but not limited to automated data replication coupled with application-level knowledge of data usage, and data transfers to Tier 2 and Tier 3 centers from multiple data provenances, to achieve the lowest wait time or fastest data processing and maximal coordination; (2) effective new approaches to data mining or data analysis through data discovery or restructuring (examples of such approaches might include fast information retrieval through advanced metadata searches, or in-situ data reduction and repacking for remote analysis and data access); (3) new tools for co-scheduling compute and storage resources for data-intensive high performance computing tasks, such as user analyses in which repeated passes over large data sets are needed, requiring fast turnaround times; and (4) distributed authorization and identity management systems, enabling single sign-on access to data distributed across many sites.

Proposed infrastructure software solutions should consider and address the advantages of closely integrating relevant components of Grid middleware, such as those contained in the software stack of the Open Science Grid, as the foundation used by nuclear physics and other science communities. Applications that propose data distribution and processing projects are encouraged to determine the relevance of, and possible future migration strategies into, existing infrastructures, and to consider integrated solutions and designs whose capacity would seamlessly include both Grid and Cloud computing resources, or would help to bridge transparently between the two.

Questions? Contact: Manouchehr.farkhondeh@science.doe.gov. You may also contact the NP Topic Associate (TA) listed at the beginning of the References section for this topic.

b. Software-Driven Network Architectures for Data Acquisition

Modern data acquisition systems are becoming more heterogeneous and distributed. This presents several new challenges, and existing Supervisory Control And Data Acquisition (SCADA) systems may no longer be applicable to these new large, dynamic, loosely coupled systems with increased reliability requirements and high data rates.

The building blocks of a data acquisition system are digitizers: either flash digitizers, or integrating digitizers of time, pulse height, or charge. These elements are required to convert detector signals into digital form in real time. The data from each detector element are labeled with a precisely synchronized time and transmitted to buffers. The total charge, the number of coincident elements, or other summary information is used to determine whether something interesting has happened, which constitutes a trigger. If justified by the trigger, the data from these elements are assembled into a time-correlated event for later analysis, a process called Event Building. At present, these elements are typically connected by buses (VME, cPCI), custom interconnects, or serial connections (USB).

For physics experiments at facilities such as RHIC, there is an increasing need for real-time decision-making processing, such as error correction and recovery, as well as real-time quality control. Data collection and device control would benefit greatly from scalable and versatile control systems.
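To make the trigger-and-event-building workflow described above concrete, the sketch below groups timestamped digitizer hits into coincidence windows and applies a simple multiplicity-and-charge trigger. The Hit record, window width, and thresholds are illustrative assumptions, not a reference design for any facility.

```python
"""Minimal sketch of software trigger formation and event building from
timestamped digitizer hits; all numeric thresholds are illustrative."""

from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Hit:
    channel: int       # detector element that fired
    timestamp_ns: int  # synchronized time tag from the digitizer
    charge: float      # integrated pulse charge (arbitrary units)


def _triggered(hits: List[Hit], min_mult: int, min_charge: float) -> bool:
    """Accept a candidate event if enough distinct channels fired and the
    total charge exceeds a threshold (the role of hardware trigger logic)."""
    return (len({h.channel for h in hits}) >= min_mult
            and sum(h.charge for h in hits) >= min_charge)


def build_events(hits: Iterable[Hit],
                 window_ns: int = 100,
                 min_mult: int = 2,
                 min_charge: float = 50.0) -> List[List[Hit]]:
    """Group time-ordered hits into coincidence windows, then keep only
    the windows that pass the trigger; each surviving group is one event."""
    events: List[List[Hit]] = []
    current: List[Hit] = []
    for hit in sorted(hits, key=lambda h: h.timestamp_ns):
        if current and hit.timestamp_ns - current[0].timestamp_ns > window_ns:
            if _triggered(current, min_mult, min_charge):
                events.append(current)
            current = []
        current.append(hit)
    if current and _triggered(current, min_mult, min_charge):
        events.append(current)
    return events
```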
As the number of channels increases, control systems based on EPICS and archiver systems cannot provide the truly distributed and scalable infrastructures needed by remote control rooms. One might instead consider a message-queueing paradigm as a possible architecture for metadata collection. In certain FRIB experiments, low event rates of 1 to 10 kevents/sec are anticipated, with dense data streams from FADC-based detector systems. The large latencies possible in highly buffered flash ADC architectures can be used to advantage in the design of the architecture.

An interesting possibility regarding the next generation of data acquisition systems is that they may ultimately be composed of separate ADCs for each detector element, connected by commercial network or serial technology. Implementation of this distributed data acquisition over commercially available network technologies such as Ethernet or InfiniBand will require additional development. One may, for example, develop a software architecture for a system that works efficiently given available network bandwidth and latencies. Desirable features of this architecture will be to: (1) synchronize time across all channels to sufficient precision to support Flash Analog-to-Digital Converter (FADC) clock synchronization (perhaps 10 nsec or better), and to support trigger formation and event building (at least 100 nsec or better); (2) determine a global trigger from information transmitted by the individual components; (3) notify the elements of a successful trigger, in order to locally store the current information; (4) collect event data from the individual elements to be assembled into events; and (5) develop software tools to validate the synchronization, triggering, and event building during normal operation.

Time synchronization is critical to the success of this architecture, as is the concurrent validation of synchronization. This architecture and its implementation could form the basis of a standard for next-generation data acquisition in nuclear physics, particularly at FRIB. It could be made available for integrating custom front-end electronics, and could also be integrated with various ADC and TDC components to form complete commercial solutions. It would be inherently scalable, from small, early test stands of a single computer with an appropriate network card, to large complex detectors.

Applications are invited in the following areas: (1) the development of data acquisition and control systems above and beyond the classic SCADA architecture; (2) soft-core FPGA module(s) to implement the network protocol for Ethernet and/or InfiniBand, able to drive existing and emerging commercial network chips, with sufficient buffering to support data aggregation using a commercial network switch, and with sufficient speed to drive 40 Gb/sec network links; and (3) time distribution protocols and hardware to support fine-grained time tagging of each network packet for later correlation and event or frame assembly, again with FPGA integration, and possibly exploiting the commercial network for some aspects of tagging.

Questions? Contact: Manouchehr.farkhondeh@science.doe.gov. You may also contact the NP Topic Associate (TA) listed at the beginning of the References section for this topic.

c. Heterogeneous Concurrent Computing

Computationally demanding theory calculations, as well as detector simulations and data analysis tasks, can be significantly accelerated through the use of general-purpose Graphics Processing Units (GPUs).
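As a small illustration of the GPU-offload pattern, the sketch below writes a toy analysis kernel (a pairwise invariant-mass calculation over random, non-physical inputs) against a NumPy-like interface and runs it on a GPU via CuPy when one is available. CuPy and the fallback logic are illustrative choices made for this example, not technologies named in this topic.

```python
"""Minimal sketch of offloading an analysis kernel to a GPU by swapping
the array module; the kernel and inputs are toy examples."""

import numpy as np

try:
    import cupy as cp   # GPU array library with a NumPy-like interface
    xp = cp             # run on the GPU if CuPy is installed
except ImportError:
    xp = np             # otherwise fall back to the CPU transparently


def invariant_mass(e1, px1, py1, pz1, e2, px2, py2, pz2):
    """Pairwise invariant mass; the same code runs on CPU or GPU because
    only the array module (NumPy vs. CuPy) changes."""
    e, px, py, pz = e1 + e2, px1 + px2, py1 + py2, pz1 + pz2
    # Clamp at zero because the random toy inputs need not be physical.
    return xp.sqrt(xp.maximum(e * e - px * px - py * py - pz * pz, 0.0))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cols = [xp.asarray(rng.uniform(1.0, 10.0, 1_000_000)) for _ in range(8)]
    masses = invariant_mass(*cols)
    print(float(masses.mean()))  # works for both NumPy and CuPy arrays
```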
The ability to exploit these graphics accelerators is constrained by the effort required to port the software to the GPU environment. More capable cross-compilation or source-to-source translation tools are needed that are able to convert very complicated templatized C++ code into high-performance code for heterogeneous architectures.

Early work by the USQCD (US Quantum Chromodynamics) collaboration has demonstrated the power of clusters of GPUs in Lattice QCD calculations. This early work was workforce intensive, but yielded a large return on investment through the hand optimization of critical numerical kernels, achieving performance gains of up to 60x with 4 GPUs.

Utilizing High Performance Computing (HPC) and Leadership Computing Facilities (LCFs) is of growing relevance and importance to experimental NP as well. NP codes written or rewritten to have the concurrency needed to perform well on prevalent and emerging multi- and many-core architectures can potentially utilize HPC effectively. Experiments are increasingly invited and encouraged to use such facilities, and DOE is assessing the needs of computationally demanding experimental activities, such as data analysis, detector simulation, and error estimation, in projecting their future computing requirements. Tools and technologies that can facilitate efficient use of HPCs and LCFs for the applications and data-intensive workflows characteristic of experimental NP are within the scope of this subtopic.

Questions? Contact: Manouchehr.farkhondeh@science.doe.gov. You may also contact the NP Topic Associate (TA) listed at the beginning of the References section for this topic.

d. Other

In addition to the specific subtopics listed above, the Department invites grant applications in other areas that fall within the scope of the general description at the beginning of this topic.

Questions? Contact: Manouchehr.farkhondeh@science.doe.gov.
