Péter Maller\(^1\), Emese Forgács-Dajka\(^1\), Dániel Berényi\(^2\)

  1. Eötvös Loránd University
  2. Freelancer

Abstract: The main goal of the project is to parallelize a well-known Babcock-Leighton solar dynamo model, which can be used to study the development of the Sun's global magnetic field and thus solar activity. Predicting solar activity remains challenging, since the underlying process is quasi-periodic and stochastic. Moreover, the dynamo mechanism that describes the underlying physics is still one of the great unsolved problems of astrophysics. This does not mean that we lack ideas or even models for the development of the magnetic field, but these models require further investigation and development.

The numerical code we created is based on an earlier Fortran program developed over several decades. Its modernization was motivated by two factors: on the one hand, the code, written by many people over a long period, contains redundant and apparently unused parts; on the other hand, its structure makes further development difficult. Our first goal was therefore to optimize and refactor the previous code, for which we chose the C programming language. Next, we parallelize the code using the CUDA framework. The reduction in running time achieved by parallelization enables comprehensive analyses: we can examine the evolution of several components of the magnetic field at higher spatial resolution, and we can also map the parameter space of the model. Among our goals is the comparison of different numerical methods, such as ADI (Alternating-Direction Implicit) and FTCS (Forward Time-Centered Space). Overall, during the implementation of the project we want to explore different options in order to find the right compromise between performance, accuracy, and ease of future development.
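As a minimal sketch of what the FTCS branch of such a code can look like on the GPU — assuming a plain diffusion-type term on a uniform 2D grid, with illustrative field names, grid sizes, and parameter values rather than the project's actual equations — one explicit update step in CUDA might be:

    #include <cstdio>
    #include <utility>
    #include <cuda_runtime.h>

    // Minimal FTCS (Forward Time-Centered Space) sketch for a diffusion-type
    // term on a uniform 2D (r, theta) grid. Field names, grid sizes and the
    // plain Laplacian are illustrative assumptions, not the dynamo equations.
    __global__ void ftcs_step(const double* u, double* u_new,
                              int nr, int nth, double dt, double eta,
                              double dr, double dth)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // radial index
        int j = blockIdx.y * blockDim.y + threadIdx.y;  // latitudinal index
        if (i < 1 || j < 1 || i >= nr - 1 || j >= nth - 1) return;  // fixed boundary

        int id = i * nth + j;
        double lap = (u[id + nth] - 2.0 * u[id] + u[id - nth]) / (dr * dr)
                   + (u[id + 1]   - 2.0 * u[id] + u[id - 1])   / (dth * dth);
        u_new[id] = u[id] + dt * eta * lap;  // explicit Euler step in time
    }

    int main()
    {
        const int nr = 256, nth = 256;
        const size_t bytes = (size_t)nr * nth * sizeof(double);
        double *u, *u_new;
        cudaMallocManaged(&u, bytes);
        cudaMallocManaged(&u_new, bytes);
        for (int k = 0; k < nr * nth; ++k) u[k] = u_new[k] = 0.0;
        u[(nr / 2) * nth + nth / 2] = 1.0;  // point-like initial field

        dim3 block(16, 16), grid((nr + 15) / 16, (nth + 15) / 16);
        for (int step = 0; step < 1000; ++step) {
            // dt chosen to respect the FTCS stability limit dt <= dr^2/(4*eta)
            ftcs_step<<<grid, block>>>(u, u_new, nr, nth, 1e-5, 1.0, 1e-2, 1e-2);
            std::swap(u, u_new);  // ping-pong buffers between steps
        }
        cudaDeviceSynchronize();
        printf("central field value after 1000 steps: %g\n",
               u[(nr / 2) * nth + nth / 2]);
        cudaFree(u);
        cudaFree(u_new);
        return 0;
    }

FTCS is only conditionally stable (for the pure diffusion term above, roughly \(\Delta t \lesssim \Delta r^{2}/(4\eta)\)), which is one motivation for comparing it with the unconditionally stable ADI scheme, at the price of tridiagonal solves along each direction.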

Örs Legeza (2023.11.01 - 2024.04.30)

Wigner Research Centre for Physics

Publications:
[1] Parallel implementation of the Density Matrix Renormalization Group method achieving a quarter petaFLOPS performance on a single DGX-H100 GPU node
[2] Two-dimensional quantum lattice models via mode optimized hybrid CPU-GPU density matrix renormalization group method
[3] Boosting the effective performance of massively parallel tensor network state algorithms on hybrid CPU-GPU based architectures via non-Abelian symmetries
[4] Massively Parallel Tensor Network State Algorithms on Hybrid CPU-GPU Based Architectures

Abstract: Numerical simulation of quantum systems in which correlations between electrons are strong, i.e., cannot be described by perturbation theory, is a central focus of modern physics and chemistry. This, however, poses a major challenge, as the computational complexity usually scales exponentially with system size. Therefore, algorithms in which such scaling can be reduced to polynomial form are the subject of intense research.

The density matrix renormalization group (DMRG) method meets this criterion. In addition, the related matrix and tensor algebra can be organized into millions of independent subtasks, which makes the method ideal for massive parallelization. Using our code, during the first phase of the project (2021-2022) we already performed large-scale simulations of various quantum systems, which led to two publications accessible on arXiv:

[1] Massively Parallel Tensor Network State Algorithms on Hybrid CPU-GPU Based Architectures, Andor Menczer, Örs Legeza, arXiv:2305.05581 (2023)

[2] Boosting the effective performance of massively parallel tensor network state algorithms on hybrid CPU-GPU based architectures via non-Abelian symmetries, Andor Menczer, Örs Legeza, arXiv:2309.16724 (2023)

The GPU Laboratory is explicitly cited in the acknowledgement of Ref. [1], as part of the results were generated during phase 1 of the project. In the second phase of the project we aim to further test our simulations on A100 GPU based infrastructure. Depending on the results, we intend to update or extend the results reported in Ref. [2].
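To illustrate why the independent-subtask structure mentioned above maps well onto GPUs — a toy sketch with made-up matrix sizes, not the DMRG code itself — a single CUDA kernel can execute a large batch of independent small matrix products, one per thread block:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Toy sketch: a large batch of independent small matrix products
    // C_b = A_b * B_b, one product per thread block. The sizes below are
    // illustrative only.
    __global__ void small_gemm_batch(const double* A, const double* B, double* C,
                                     int m, int n, int k)
    {
        int b = blockIdx.x;  // which independent subtask this block handles
        const double* Ab = A + (size_t)b * m * k;
        const double* Bb = B + (size_t)b * k * n;
        double*       Cb = C + (size_t)b * m * n;

        for (int idx = threadIdx.x; idx < m * n; idx += blockDim.x) {
            int i = idx / n, j = idx % n;
            double acc = 0.0;
            for (int p = 0; p < k; ++p)
                acc += Ab[i * k + p] * Bb[p * n + j];
            Cb[idx] = acc;
        }
    }

    int main()
    {
        const int batch = 1 << 16, m = 8, n = 8, k = 8;  // 65536 independent tasks
        double *A, *B, *C;
        cudaMallocManaged(&A, (size_t)batch * m * k * sizeof(double));
        cudaMallocManaged(&B, (size_t)batch * k * n * sizeof(double));
        cudaMallocManaged(&C, (size_t)batch * m * n * sizeof(double));
        for (size_t i = 0; i < (size_t)batch * m * k; ++i) A[i] = 1.0;
        for (size_t i = 0; i < (size_t)batch * k * n; ++i) B[i] = 2.0;

        small_gemm_batch<<<batch, 64>>>(A, B, C, m, n, k);  // one block per task
        cudaDeviceSynchronize();
        printf("C[0](0,0) = %g (expected %g)\n", C[0], 2.0 * k);
        cudaFree(A);
        cudaFree(B);
        cudaFree(C);
        return 0;
    }

In the real code such operations would typically be dispatched through optimized batched library routines and a task scheduler; the sketch only illustrates why tens of thousands of independent products can keep a GPU saturated.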

Balázs Szigeti, István Szapudi, Imre Barna, Gergely Gábor Barnaföldi (2024.08.01-2024.10.30.)

Abstract: The Hubble constant \(H_0\) characterizes the rate of the universe's expansion. The discrepancy between the low and high redshift measurements of \(H_0\) is the highest significance tension within the concordance \(\Lambda\)CDM paradigm. We show that a Gödel-inspired, slowly rotating dark-fluid variant of the concordance model resolves this tension with an angular velocity today of \(\omega_0 \simeq 2\times 10^{-3}\ \mathrm{Gyr}^{-1}\). Curiously, this is approximately also the maximal rotation with a tangential velocity less than the speed of light at the horizon.

Aneta Magdalena Wojnar, Anna Horváth, Gergely Gábor Barnaföldi (2024.08.01-2024.10.30.)

Abstract: Heisenberg's uncertainty relation can be modified in strong gravitational fields. This study aims to investigate such modifications in a 5-dimensional Kaluza-Klein spacetime.
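One commonly studied form of such a modification — quoted here only for context, as an assumption about the type of deformation considered — is the generalized uncertainty principle
\[
\Delta x\,\Delta p \;\gtrsim\; \frac{\hbar}{2}\left[\,1 + \beta\,(\Delta p)^2\right],
\]
where the parameter \(\beta\) encodes the gravitational (minimal-length) correction; in a 5-dimensional Kaluza-Klein spacetime both the value and the interpretation of this correction change.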

Anna Horváth [1,2], Gergely Gábor Barnaföldi [1], Emese Forgács-Dajka [2] (2024.09.01-12.31)

[1] Wigner Research Centre for Physics [2] Eötvös Loránd University

Abstract: We are investigating compact stars within a static, spherically symmetric Kaluza-Klein-like theory that encompasses several extra compactified spatial dimensions. We produced an equation of state that, together with the Tolman-Oppenheimer-Volkoff equation, can be used to model neutron stars. This project tests theories beyond the standard model of particle physics, with an emphasis on the possibility of placing constraints on the size of one extra compactified spatial dimension.
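For context, the standard four-dimensional Tolman-Oppenheimer-Volkoff equations, closed by an equation of state \(p(\varepsilon)\), read (in units \(G=c=1\))
\[
\frac{dp}{dr} \;=\; -\,\frac{(\varepsilon + p)\,(m + 4\pi r^{3} p)}{r\,(r - 2m)},
\qquad
\frac{dm}{dr} \;=\; 4\pi r^{2} \varepsilon ,
\]
and the Kaluza-Klein-like theory investigated here modifies these relations through the extra compactified dimensions.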

Neelkamal Mallick [1], Suraj Prasad [1], Aditya Nath Mishra [2,4], Raghunath Sahoo [1] and Gergely Gábor Barnaföldi [3] (2024.05.01 - 2024.08.31)

[1] Department of Physics, Indian Institute of Technology Indore [2] Department of Physics, School of Applied Sciences, REVA University [3] Wigner Research Centre for Physics [4] Department of Physics, University Centre For Research & Development (UCRD), Chandigarh University

Publication: Anisotropic flow fluctuation as a possible signature of clustered nuclear geometry in O-O collisions at the Large Hadron Collider

Abstract: A nucleus with \(4n\) nucleons, such as \(^{8}\)Be, \(^{12}\)C, \(^{16}\)O, etc., is theorized to possess clusters of \(\alpha\) particles (\(^{4}\)He nuclei). In this study, we exploit the anisotropic flow coefficients to discern the effects of an \(\alpha\)-clustered nuclear geometry with respect to a Woods-Saxon nuclear distribution at \(\sqrt{s_{NN}} = 7\) TeV LHC energy.
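For reference, the anisotropic flow coefficients \(v_n\) are defined through the Fourier expansion of the azimuthal particle distribution,
\[
\frac{dN}{d\varphi} \;\propto\; 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\bigl[n\,(\varphi - \Psi_n)\bigr],
\]
where \(\Psi_n\) is the \(n\)-th order symmetry-plane angle, and the baseline Woods-Saxon density is \(\rho(r) = \rho_0 / \{1 + \exp[(r-R)/a]\}\) with half-density radius \(R\) and surface diffuseness \(a\).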

Antal Jakovác, Anna Horváth, Bence Dudás (2024.07.01-09.30)

Abstract: Environmental sound sample analysis using artificial intelligence methods for applied research.

Dániel Léber, Mihály Ormos (2024.07.01-09.30)

Abstract: We focus on entropy as a measure of risk and on the role it can play in equilibrium asset pricing. In analogy with the traditionally used capital asset pricing model (CAPM), the entropy of an asset can be divided into a mutual-information component (a measure of the comovement with the market portfolio, i.e., the non-diversifiable risk) and a conditional-entropy component (a measure of the diversifiable, individual risk). We investigate the relationship between these components and conventional risk metrics such as standard deviation and beta. We also propose a better solution to the notorious puzzles of asset pricing. Entropy as a measure of risk has already been described, and its advantages in portfolio optimization and risk management are acknowledged in the economic literature. We use data from the OpenBB database and Kenneth R. French's data library to calculate daily returns and the various risk measures associated with them. We show the diversification effects of different risk measures and their stability over time. We introduce a new method to separate the individual and systemic risks of assets. We also validate our model using the conventional tests of the CAPM. Our regression-based results are tested both in-sample and out-of-sample. The robustness of our model is evaluated by both cross-validation and the use of rolling windows over time.
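For context, the decomposition referred to above follows the chain rule of Shannon entropy: for an asset return \(X\) and the market-portfolio return \(M\),
\[
H(X) \;=\; I(X;M) + H(X \mid M),
\]
where the mutual information \(I(X;M)\) plays a role analogous to the systematic (beta-related) part of CAPM risk and the conditional entropy \(H(X\mid M)\) to the diversifiable, idiosyncratic part, mirroring the variance decomposition \(\sigma_i^2 = \beta_i^2 \sigma_M^2 + \sigma_\varepsilon^2\); the exact estimators used in the project may of course differ.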

Zoltán Lehóczky, Márk Bartha (2024.01.01 - 05.31)

Lombiq Ltd.

Link: GPU Day Case Study

Abstract: GPU Day is a conference organized by the Wigner Scientific Computational Laboratory that focuses on massively parallel computing, visualization, and data analysis in both scientific and industrial applications. We have also presented our Hastlayer .NET hardware accelerator project there many times.

The website serves as an information hub for these annual conferences. It originally ran on Orchard 1 on DotNest, and the time had come to migrate it to Orchard Core. While such migrations always come with certain challenges due to the new features introduced in Orchard Core, we kept things simple by not changing the frontend of the site, even though it is somewhat outdated.

Vencel Szabó (ELTE); Milán Gábor Barbola (ELTE); Máté Méhes (ELTE); Gábor Papp (ELTE); Gábor Bíró (Wigner); Zsófia Jólesz (ELTE-Wigner); Bence Dudás (ELTE-Wigner) (2024.03.01 - 2024.06.30)

Abstract: Proton Computed Tomography (pCT) differs from "normal" photon-based CT because the basic interaction with matter differs: in pCT small-angle Coulomb scattering is the dominant process, while in (photon) CT the incoming photon is absorbed. This makes pCT a much harder reconstruction problem.

During the project the students generate input data for the pCT algorithm by running the GATE simulation software at scale on different phantoms. Evaluating the inputs with the Richardson-Lucy algorithm, we determine the number of runs at different positions and angles needed to obtain an acceptable image resolution. Further plans involve optimizing the Richardson-Lucy algorithm on a GPU cluster to speed up the calculations. Furthermore, the students also try to reconstruct the pCT input data from the detector outputs.
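As a minimal sketch of the Richardson-Lucy iteration that is to be ported to the GPU — a 1D toy example with an assumed symmetric point-spread function and made-up sizes, not the actual pCT reconstruction code — one iteration can be written as three small CUDA kernels:

    #include <cstdio>
    #include <cuda_runtime.h>

    // 1D convolution with a symmetric PSF of length K (zero padding at the edges).
    __global__ void convolve(const double* x, const double* psf, double* out,
                             int n, int K)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        double s = 0.0;
        for (int k = 0; k < K; ++k) {
            int j = i + k - K / 2;
            if (j >= 0 && j < n) s += psf[k] * x[j];
        }
        out[i] = s;
    }

    // Element-wise ratio of the measured data to the current blurred estimate.
    __global__ void ratio(const double* data, const double* blur, double* r, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) r[i] = data[i] / (blur[i] + 1e-12);
    }

    // Multiplicative Richardson-Lucy update of the estimate.
    __global__ void rl_update(double* est, const double* corr, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) est[i] *= corr[i];
    }

    int main()
    {
        const int n = 1024, K = 9, iters = 50;
        const int B = 256, G = (n + B - 1) / B;
        double *est, *data, *psf, *blur, *r, *corr;
        cudaMallocManaged(&est,  n * sizeof(double));
        cudaMallocManaged(&data, n * sizeof(double));
        cudaMallocManaged(&psf,  K * sizeof(double));
        cudaMallocManaged(&blur, n * sizeof(double));
        cudaMallocManaged(&r,    n * sizeof(double));
        cudaMallocManaged(&corr, n * sizeof(double));

        for (int k = 0; k < K; ++k) psf[k] = 1.0 / K;   // flat, symmetric PSF
        for (int i = 0; i < n; ++i) {
            est[i]  = 1.0;                               // flat initial estimate
            data[i] = (i > 400 && i < 600) ? 2.0 : 0.5;  // toy "measured" profile
        }

        for (int it = 0; it < iters; ++it) {
            convolve<<<G, B>>>(est, psf, blur, n, K);    // blur current estimate
            ratio<<<G, B>>>(data, blur, r, n);           // compare with measurement
            convolve<<<G, B>>>(r, psf, corr, n, K);      // back-project (PSF symmetric)
            rl_update<<<G, B>>>(est, corr, n);           // multiplicative update
        }
        cudaDeviceSynchronize();
        printf("reconstructed value at the plateau centre: %g\n", est[512]);
        cudaFree(est); cudaFree(data); cudaFree(psf);
        cudaFree(blur); cudaFree(r); cudaFree(corr);
        return 0;
    }

For a non-symmetric point-spread function the back-projection step uses the mirrored kernel, and in the actual 2D/3D pCT setting the two convolutions become the corresponding projection and back-projection operators.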

Ádám Kadlecsik (2023.11.01 - 2024.03.31)

Eötvös Loránd University

Abstract: The small, and thus usually rocky, exoplanets observed so far generally orbit their central star closely, which makes them easier to detect with ground-based and space instruments. Such planets are expected to be tidally locked, meaning that their orbit around the central star ("year") and their rotation around their axis ("day") have the same period. Because of the tidal locking, the exoplanet always shows the same side to its star, so the planet has a permanent day and a permanent night hemisphere. The atmospheric flow can therefore be modeled in a rotating laboratory setup, in which the lateral boundary co-rotating with the water body that simulates the atmosphere carries an azimuthal, dipole-like heat-flux boundary condition. This can be investigated with both experimental and simulation methods.

Ákos Gellért [1,2], Oz Kilim [1], Anikó Mentes [1] and István Csabai [1] (2023.02.15 - 2023.12.15)

[1] ELTE Department of Physics of Complex Systems
[2] ELKH Veterinary Medical Research Institute

Abstract: The first recorded flu pandemic occurred in 1580, and flu pandemics have occurred several times since then, the most severe being the Spanish flu of 1918-1919, which killed millions of people worldwide. In the 20th century, significant progress was made in understanding the virus and developing vaccines, which have greatly reduced the impact of flu pandemics. Despite this progress, the flu continues to be a major public health issue, with millions of cases reported each year and an annual death toll in the tens of thousands.

Hemagglutinin, a surface membrane protein of the Influenza virus, plays an important role in the infection process, as it allows the virus to attach to and penetrate host cells. The flu vaccine is formulated each year based on which strains of the virus are predicted to be most prevalent, and it is designed to stimulate the body's immune response to the hemagglutinin protein of those strains. Many antigenic maps have been constructed so far, which reveal the relationships between different strains of a virus, specifically with regard to the way their antigens [1] (e.g., hemagglutinin) are recognized by the immune system. Experimental Influenza HA deep mutational scanning data [2] are also available for the research community to explore the virus's functions.

In this project, we aim to combine, in silico, antigenic maps and deep mutational scanning data to obtain a more comprehensive understanding of the evolution and functional properties of the Influenza virus. For example, combining antigenic map data with deep mutational scanning data can provide information about how different mutations affect the ability of a virus to evade the immune response, as well as which regions of the virus are critical for this evasion. This information can be used to inform the design of vaccines and antiviral drugs that target specific regions of the virus that are critical for its function and evolution. We will use AlphaFold2 [3] and ESMFold [4], among the fastest and most reliable AI-based protein structure prediction applications available, to generate single- and/or multiple-mutant structures of various Influenza HA proteins.

[1] Antigenic map.
[2] Flu HA DMS.
[3] J. Jumper et al., "Highly accurate protein structure prediction with AlphaFold," Nature, vol. 596, no. 7873, pp. 583–589, Jul. 2021, doi: 10.1038/s41586-021-03819-2.
[4] ESMFold.