Halcyon. Posted February 19, 2016

A significant percentage of the world's supercomputing horsepower is in the United States, often in facilities owned or operated in partnership with the US government, as the TOP500 list highlights twice per year. The Department of Energy wants to make some of that computing power available to conventional companies to improve energy efficiency and reduce waste, and it has announced its first round of partners. A second wave of proposals will be considered starting next month.

The government has agreed to sign cooperative research and development agreements (CRADAs) under which it will provide $300,000 in funding for each initiative, while the companies in question contribute at least $60,000 in funding (or an equivalent in-kind contribution). The program is known as HPC4Mfg, and while the exact terms of each agreement were not disclosed, here are some of the winning proposals:

- The Agenda 2020 Technology Alliance will work with Lawrence Livermore National Laboratory and Lawrence Berkeley National Laboratory to find methods of increasing the solid paper content entering the drying section of the process from 45-55% up to 65%. The US pulp and paper industry is currently the third-largest manufacturing consumer of energy in the country, and it hopes to cut the amount of energy it uses to dry pulp by 20%.
- The steel industry (the fourth-largest energy-consuming industry in the US) wants to reduce the amount of coke (fuel) it uses by roughly 30%, which could cut costs by up to $894 million per year.
- GlobalFoundries will collaborate with Lawrence Berkeley on a project called "Computation Design and Optimization of Ultra-Low Power Device Architectures."
- GE will partner with Oak Ridge and Lawrence Livermore to design more efficient aircraft engines, and hopefully improve durability and lifespan as well.
That particular project is called the "Massively Parallel Multi-Physics Multi-Scale Large Eddy Simulations of a Fully Integrated Aircraft Engine Combustor and High Pressure Vane." More details on additional projects can be found on the Lawrence Livermore National Laboratory pages.

Beyond Moore's law

Earlier this week, we ran a story on the myths of Moore's law and the kinds of scaling to expect in future devices as conventional transistor scaling comes to an end. These kinds of collaborative projects represent a different type of scaling than what we typically consider at ExtremeTech, but one that may become more important in the years ahead. As supercomputers push towards exascale levels of computing, there's a real opportunity to apply that power to complex models and problems that were previously too difficult or expensive to simulate.

The companies partnering with the DoE are all significant players in their respective fields, but that doesn't mean they're familiar with the HPC industry. Building the processors used in supercomputers, it turns out, isn't the same thing as knowing how to use supercomputers to optimize processors. In the old days, when computing performance roughly doubled every 18-24 months, there was little incentive to squeeze every last ounce of performance out of any given server, workstation, or desktop. Now, that kind of optimization is one way we'll increase performance in the future, and the government's research laboratories are going to play a critical role in extending the benefits of HPC modeling to companies that stand to benefit from it but can't afford the substantial cost of purchasing, programming, and maintaining a computer large enough to make the TOP500.

If these first 20 programs pay dividends, we'll hopefully see similar pilot programs in other areas. Considering the modest up-front costs, the benefits could pay for themselves in short order.
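The pulp-and-paper numbers above lend themselves to a quick mass-balance sanity check. Here is a minimal sketch in Python, assuming (illustratively) that sheets enter the dryer at 50% solids today versus the 65% target; the exact endpoints and the simple water-per-fiber model are our assumptions, not figures from the program:

```python
def water_per_kg_fiber(solids_fraction):
    """Water (kg) to be evaporated per kg of dry fiber when the
    sheet enters the dryer at the given solids mass fraction."""
    return (1.0 - solids_fraction) / solids_fraction

# Illustrative endpoints based on the article's 45-55% -> 65% target.
current = water_per_kg_fiber(0.50)   # 1.0 kg water per kg fiber
improved = water_per_kg_fiber(0.65)  # ~0.54 kg water per kg fiber

reduction = 1.0 - improved / current
print(f"Evaporation load cut by {reduction:.0%}")  # prints "Evaporation load cut by 46%"
```

Even this crude estimate shows why raising solids content ahead of the dryer is attractive; the industry's stated 20% energy goal is more conservative, since dryer energy consumption does not scale perfectly with evaporation load.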