
More and different types of data could be handled at one time to avoid data bottlenecks.

A Texas Tech University computer scientist received a $1 million grant from the National Science Foundation (NSF) to create a faster, better method for supercomputing.
Yong Chen, an assistant professor of computer science and director of the Data-Intensive Scalable Computing Lab, will lead a team of researchers to develop a new concept called “compute on data path,” a step toward “data-centric” computing that assimilates and analyzes more and different types of data used in scientific discovery, all at once.
“This is a sizable grant awarded from a very competitive NSF core program, and we deeply appreciate the support for our work from the NSF and the recognition from our peers,” Chen said. “Our research primarily addresses data-intensive scientific computing needs, aiming to create 'data-centric computing' for better scientific discovery and innovation.”
Chen said he and other scientists will lay the groundwork for a new data-assimilation computing concept capable of combining data that may not be similar.
“At this stage, this is more about methodology development than creating an actual supercomputer,” he said. “It is more an investigation to see whether a new concept is feasible and whether the current software stack can be changed in a more data-centric way to achieve significantly better productivity in scientific discovery.”
Supercomputing has become a popular and useful tool for conducting computer simulations and data analysis for scientific discovery in many fields, including climate science, healthcare, biology, chemistry and astrophysics. The problem, Chen said, is that the methods used to conduct these simulations and analyses are “computing-centric.”
As data volume grows over time in this “computing-centric” model, the data floods into the computer system, creating a bottleneck of information.
“The traditional computing-centric method is not really the best way for today's 'data-intensive' scientific discovery,” he said. “This three-year project will develop new concepts and methodologies of 'data-centric' solutions. We're going to model computations and data as objects and move the computation objects to the data objects instead of moving the data to the computations. We will try to make the computations happen right in place with the data for better performance, efficiency and productivity in scientific discovery via computer simulations and analyses.
“If everything goes well, this could make a significant impact to the supercomputing field.”
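The article describes the idea only at a conceptual level. As a minimal sketch of the distinction Chen draws, the following Python (with hypothetical names such as Node, computing_centric and data_centric, which are illustrative and not from Chen's project) contrasts the traditional model, which pulls data to a central computation, with a data-centric model that ships a small computation object to the node where the data already lives:

```python
# Hypothetical illustration of "compute on data path": all names here are
# assumptions for the sketch, not part of Chen's actual system.

class Node:
    """A storage/compute node that holds a data object locally."""
    def __init__(self, name, data):
        self.name = name
        self.data = data  # the data object resident on this node

    def run(self, computation):
        # Data-centric: the computation object is shipped to the node
        # and executed in place, where the data already lives.
        return computation(self.data)

def computing_centric(nodes, computation):
    # Traditional model: pull every data object across the network to a
    # central location, then compute. Transfer cost grows with data
    # volume and becomes the bottleneck the article describes.
    gathered = [node.data for node in nodes]  # simulated bulk data transfer
    return [computation(data) for data in gathered]

def data_centric(nodes, computation):
    # "Compute on data path": move the (small) computation object to each
    # data object instead; only the (small) results travel back.
    return [node.run(computation) for node in nodes]

if __name__ == "__main__":
    nodes = [Node(f"node{i}", list(range(i * 1000, (i + 1) * 1000)))
             for i in range(4)]
    mean = lambda xs: sum(xs) / len(xs)  # the computation object

    # Both models produce the same answer; what differs is what moves
    # over the network.
    assert computing_centric(nodes, mean) == data_centric(nodes, mean)
    print(data_centric(nodes, mean))
```

The design intuition is that a computation object is usually far smaller than the data it operates on, so shipping the computation rather than the data avoids the information bottleneck that the computing-centric model creates as data volume grows.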