
HRT

Seminar

What are we going to need to keep computing turbulence, and what can we get in return?

Jimenez, J (Universidad Politecnica, Madrid)
Friday 26 September 2008, 13:30-14:15

Seminar Room 1, Newton Institute

Abstract

Direct numerical simulation has been one of the primary tools of turbulence research in the last two decades. Since the first simulations of low-Reynolds-number turbulent flows appeared in the early 1980s, the field has moved to higher Reynolds numbers and to more complex flows, until finally overlapping the lower range of laboratory experiments. This has provided a parametric continuity that can be used to calibrate experiments and simulations. It has turned out that, whenever both are available, simulations are usually the more consistent, mainly because they have essentially no instrumental constraints, and because the flow parameters can be controlled much more tightly than in the laboratory (although the two are not necessarily equivalent). Perhaps more important is that simulations afford a degree of control over the definition of the physical system that is largely absent in the laboratory, allowing flows to be studied "piecemeal" and broken into individual "parts". We are now at the point at which these techniques can be applied to flows with nontrivial inertial cascades, thus providing insight into the "core" of the turbulence machinery.

This has been made possible by the continued increase in computer power, which has roughly doubled every year, providing a factor of 1000 in computing speed, and a factor of ten in Reynolds number, every decade. Software evolution has also been important, and will be increasingly so. Numerical schemes have changed little, typically relying on high-resolution methods that require smaller grids, but hardware models have moved from vector, to cache-based, to highly parallel, each of which has required major reprogramming. Lately, most of the speed-up has come from higher processor counts and finer granularities, which interfere with the wide stencils of high-resolution methods. The trend towards complex flow geometries also pushes numerical schemes towards lower orders. This may bring at least a temporary slow-down in the rate of increase of Reynolds numbers: moving from spectral to second-order methods typically requires a factor of three in resolution, or a factor of 80 in operation count, which is about six years of hardware growth.

Another limiting factor is data storage and sharing. Typical simulations today generate terabytes of data, which have to be archived, post-processed, and shared with the community. This will increase to petabytes shortly, especially if lower-order methods, with their larger grids, become necessary. There are at present few archival, high-availability storage options for these data volumes, all of them expensive, and essentially no way to move the data among groups. Problems of this type have been common during the last two decades, and they have been solved. They will no doubt also be solved now, but they emphasise that simulations, although by now an indispensable tool of turbulence research, will continue to be far from routine for some time.
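
The factors quoted above fit a simple back-of-the-envelope scaling estimate. The sketch below is not part of the talk; it assumes the standard estimate that the operation count of a direct simulation grows roughly as Re^3 (grid points scaling as Re^(9/4), time steps as Re^(3/4)), and it interprets the factor of 80 as the factor of three in resolution applied to three spatial directions plus the time step. Under those assumptions, the arithmetic reproduces the figures in the abstract:

```python
# Back-of-the-envelope estimates behind the figures quoted in the abstract.
# Assumption (not stated in the abstract): the operation count of a DNS of
# homogeneous turbulence grows roughly as Re**3 (grid points ~ Re**(9/4),
# time steps ~ Re**(3/4)).

import math

# Hardware growth quoted in the abstract: speed roughly doubles every year.
speedup_per_decade = 2 ** 10                      # ~1024, i.e. "a factor of 1000"
re_gain_per_decade = speedup_per_decade ** (1 / 3)
print(f"Reynolds-number gain per decade: ~{re_gain_per_decade:.1f}x")   # ~10x

# Cost of moving from spectral to second-order schemes: a factor of ~3 in
# resolution, applied in three space directions plus the time step.
resolution_factor = 3
extra_operations = resolution_factor ** 4         # ~81, i.e. "a factor of 80"
years_of_hardware = math.log2(extra_operations)   # at one doubling per year
print(f"Extra operations: ~{extra_operations}x, "
      f"about {years_of_hardware:.1f} years of hardware growth")
```

Both results are order-of-magnitude estimates only, which is all the argument requires.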

Presentation

[pdf]

