Winners of the TUHH Startup Spirit projects announced

[Image: StartupSpirit2015]

The business idea investigated by my team, "Supercomputing on the Desktop", received an award in the TUHH Startup Spirit practical project.

Excerpts from the newsletter of the TUHH Startup Dock:

And the winner is… "Supercomputing on the desktop" by Dr. Christian Janßen (second from left) of the TUHH Institute for Fluid Dynamics and Ship Theory, with his student team (from left) Katharina Kharkevitch, Fabian Schwarz, Jan Schekatz and Vincent Mayer.

In the first round of the Startup Spirit practical project last winter semester, the five prevailed among 36 project teams with a total of 212 participating Bachelor students. Behind their idea is a hardware-software combination that includes, among other things, the innovative, in-house "elbe" software. It enables customers to run numerical simulations with short computation times and at competitive cost. As part of the lecture series "Fundamentals of Business Administration", Bachelor students at the TUHH Institute of Entrepreneurship (TIE) had the opportunity to develop business models for real founders.

As an added treat for the students, Dr. Christian Janßen will present the results of the project at the CAE Forum, the communication and networking platform for simulation experts, at the Hannover Messe.

Hamburg University of Technology named NVIDIA CUDA Research Center

[Image: TUHH_Logo]

Hamburg, Germany — January 8, 2015 — Hamburg University of Technology (TUHH) today announced that it has been named a CUDA® Research Center by NVIDIA, the world leader in visual computing.

The CUDA parallel programming model is an important element of the NVIDIA accelerated computing platform, the leading platform for accelerating data analytics and scientific computing. CUDA enables programmers to achieve dramatic increases in computing performance by harnessing the power of NVIDIA® GPU accelerators.

CUDA Research Centers are institutions that embrace and utilize GPU-accelerated computing across multiple research fields, and are at the forefront of some of the world's most innovative and important scientific research. Hamburg University of Technology (TUHH) was recognized for its advanced research in computational fluid dynamics at the Institute for Fluid Dynamics and Ship Theory using GPU accelerators. Emphasis is put on Lattice Boltzmann simulations of two-phase flows in conjunction with dynamic overset grids for multiple floating bodies. Applications are primarily oriented toward marine and ocean engineering, e.g. simulation-based early tsunami warning systems or ice-going ship operations.

"Supercomputing on the Desktop offers a new arena for the application of simulation-based sciences to a broad range of future challenges, and might have a substantial social and economic impact," says Prof. Garabed Antranikian, President of the TUHH. "This designation recognizes the excellence of TUHH at the scientific interplay between engineering, computer science and applied mathematics."

Research aims at near real-time simulations and online data exploration using GPU accelerator technology to derive next-generation kinetic flow solvers. The vision of the Computational Fluid Dynamics team is to participate in the paradigm shift from traditional mainframes towards office supercomputing.
As a CUDA Research Center, TUHH will have pre-release access to NVIDIA GPU hardware and software, the opportunity to attend exclusive events with key researchers and academics, a designated NVIDIA technical liaison, and access to specialized online and in-person training sessions.

About Hamburg University of Technology (TUHH)
Hamburg University of Technology (TUHH) is one of Germany's youngest and most successful research-driven universities. Plans for a university of technology in the Süderelbe area of Hamburg go back to the 1920s. Fifty years later, in 1978, the Hamburg University of Technology came into being. Today, TUHH has around 100 professors and 1,150 employees. TUHH is a competitive entrepreneurial university focusing on high-level performance and high quality standards. It is dedicated to the Humboldtian principle of the unity of research and education, and aims at excellence at the international level in its strategic engineering research fields.

The CUDA Research Center at TUHH is associated with the Institute for Fluid Dynamics and Ship Theory (FDS) and the CUDA-accelerated flow solver elbe.

For further information, contact:

Dr.-Ing. Christian F. Janßen
Institute for Fluid Dynamics & Ship Theory
(+49) 40 / 42878 – 6040
christian.janssen@tuhh.de
http://www.tuhh.de/elbe

Prof. Dr.-Ing. Thomas Rung
Institute for Fluid Dynamics & Ship Theory
(+49) 40 / 42878 – 6054
thomas.rung@tuhh.de
http://www.tuhh.de/fds

Article in DIGITAL ENGINEERING Magazin 08-2014

Our article on the use of graphics-card hardware in the automotive design process has now been published in DIGITAL ENGINEERING Magazin.

Realitätsprüfung – GPU-Computing in der Automobilentwicklung
Christian F. Janßen and Thorsten Grahs, DIGITAL ENGINEERING Magazin 08-2014, October/November 2014

[Image: DEM_08_2014]

ONC Tsunami Workshop 2014

In March 2014, 50 years after the 1964 Alaska earthquake and tsunami, Ocean Networks Canada organized a two-day technical workshop on tsunami modeling. The workshop covered advanced modeling and prediction technologies for real-time tsunami forecasting, detection and alerts. In this context, I presented the potential of the elbe solver for efficient and accurate simulations of wave impact and wave-structure interactions in complex three-dimensional topologies, which can be a valuable contribution to tsunami warning systems in British Columbia.

Further details and the workshop agenda can be found at ONC Tsunami Workshop 2014.

[Image: ONC_Tsunami_Workshop_2014]

Non-uniform grids in elbe

We recently implemented and tested a grid refinement strategy in the LBM solver elbe. First, the concept was tested in the two-dimensional single-phase module elbeSP2D. The velocity field of a 2D Kármán vortex street at Re = 500 is shown below. After a proper validation of the grid coupling, the interpolation routines will be included in all the remaining elbe modules, including those for free-surface and multiphase flows.
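For readers unfamiliar with the method: the single-phase D2Q9 scheme underlying elbeSP2D alternates a local BGK collision with a streaming step along nine discrete lattice velocities. The following is a minimal NumPy toy sketch of that collide-and-stream cycle for illustration only (it is not elbe code; the grid size, relaxation time tau, and initial perturbation are arbitrary assumptions):

```python
import numpy as np

# D2Q9 lattice: nine discrete velocities and their weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order BGK equilibrium for all nine directions."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    """One collide-and-stream update with relaxation time tau."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f_post = f - (f - equilibrium(rho, ux, uy)) / tau   # collision
    # streaming on a periodic grid: shift each population along its velocity
    for i in range(9):
        f_post[i] = np.roll(np.roll(f_post[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f_post

# tiny demo: 32x32 periodic grid with a small shear perturbation
nx = ny = 32
rho0 = np.ones((nx, ny))
ux0 = 0.05 * np.sin(2*np.pi*np.arange(nx)/nx)[:, None] * np.ones((1, ny))
f = equilibrium(rho0, ux0, np.zeros((nx, ny)))
for _ in range(100):
    f = lbm_step(f, tau=0.8)
print(f.sum())  # total mass is conserved: nx*ny = 1024.0
```

On a periodic domain both collision and streaming conserve mass, so the sum of all populations stays constant; checking this invariant is a useful sanity test before validating physical quantities such as the vortex-shedding frequency of the Kármán street.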

[Image: elbeSP2D_nuf]

elbe team @ GACM 2013

The elbe team will present the latest advances in the field of visualization and fluid-structure interaction at the 5th GACM Colloquium on Computational Mechanics (GACM 2013):

Monday, 15:20 – 15:40:
Nagrelli, Heinrich: GPGPU-accelerated simulation of wave-ship interactions using LBM and a quaternion-based motion modeler

Tuesday, 13:30 – 13:50
Koliha, Nils: Efficient Grid Generation and On Device Visualization for Massively Parallel GPU Accelerated Lattice Boltzmann CFD Simulations

Monday, 18:10, Poster Session
Koliha, Nils: Efficient Grid Generation and On Device Visualization for Massively Parallel GPU Accelerated Lattice Boltzmann CFD Simulations (Poster/Live Demo)

Moreover, this year’s symposium on Fluid Mechanics is organized by Christian Janßen and Thomas Rung (symposium #12, GACM web page). The symposium features 18 talks in four sessions on Hybrid Methods, the Lattice Boltzmann Method, Optimization, and FEM and ACM. It will take place on Tuesday and Wednesday from 10:30 to 12:10 and from 13:30 to 15:10 in building N, room 0.007 (campus map).

elbe

During my last year at the Institute for Fluid Dynamics and Ship Theory at TUHH, I worked on refactoring, extending and combining my previously written GPGPU codes into a single code framework. The result is

elbe – the efficient lattice boltzmann environment

from Hamburg. It features 1D, 2D and 3D Lattice Boltzmann kernels for a variety of physical models, from the depth-averaged one-dimensional shallow water equations to three-dimensional multiphase models. elbe has already been used in a number of B.Sc. and M.Sc. projects and for teaching purposes. Further information on the code, a list of publications and a multimedia gallery is now available at

http://www.tuhh.de/elbe

Testing the new GeForce GTX Titan

We recently tested the new NVIDIA Kepler boards for our GPGPU flow solver elbe. After initial tests on the Kepler K20 boards were very promising and led to speed-up factors of up to 2.5x (without any code modifications), we decided to purchase the new GeForce GTX Titan board to see if the consumer hardware can keep up with the professional boards. This consumer card features 2688 streaming processors, 6 GB device memory and single- and double-precision compute capability.

Running the elbe code for both 2D/3D and single/double-precision calculations, we measured up to 1.5 GNUPS (giga node updates per second) for a D2Q9 single-phase LBM calculation, 2.5 times the throughput of our recent Tesla C2075 GPGPUs.
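For reference, the GNUPS metric simply counts lattice-node updates per wall-clock second, in billions: every node of the grid is updated once per time step. A small helper makes the bookkeeping explicit (the grid size and run time below are made-up illustrative numbers, not measured elbe results):

```python
def gnups(nx, ny, nz, steps, seconds):
    """Giga node updates per second: each of the nx*ny*nz lattice
    nodes is updated once per time step."""
    return nx * ny * nz * steps / seconds / 1e9

# hypothetical example: a 1024x1024 2D run, 30000 steps in 21 s
print(round(gnups(1024, 1024, 1, 30000, 21.0), 2))  # → 1.5
```

Because the metric is independent of the velocity set, it allows rough comparisons between 2D and 3D kernels, although the work per node update differs (e.g. 9 populations for D2Q9 versus 19 or 27 in common 3D lattices).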

Further reading:

  • http://www.nvidia.de/object/nvidia-kepler-de.html
  • http://www.nvidia.de/titan-graphics-card