tetmag documentation
tetmag is a micromagnetic finite-element simulation software. It simulates the magnetization field and its evolution in time in three-dimensional ferromagnetic nanostructures of arbitrary shape. As a general-purpose micromagnetic solver, tetmag can treat a large variety of standard tasks in micromagnetic simulations, such as calculating static magnetization textures, hysteresis loops, dynamic switching processes, magnetic high-frequency oscillations, or current-driven magnetization dynamics.
By exploiting advanced data compression methods, highly efficient solvers and integrators, as well as GPU acceleration, tetmag is suitable for high-performance computations and large-scale micromagnetic simulations.
The source code is available under the AGPL-3.0 license at https://github.com/R-Hertel/tetmag.
Installation Guide
The tetmag software employs various programming languages: the main part is object-oriented C++, some modules are written in C, and the GPU-related modules are programmed in CUDA / Thrust. Moreover, it uses several external numerical libraries. Although care has been taken to facilitate the installation process as far as possible, one may encounter difficulties in combining the various dependencies, preparing the necessary environment, and compiling the software. This installation guide describes the main steps required to compile tetmag. We will focus on the Linux operating system, and more specifically on the Ubuntu distribution. Nevertheless, the software and its dependencies are platform-independent, so it should be possible to install tetmag on other operating systems. While we primarily use Ubuntu Linux, we have also successfully compiled tetmag on macOS and other Linux distributions.
Prerequisites
Most of tetmag's dependencies are available as precompiled packages that can be readily installed from software repositories. In some cases (discussed below), however, we recommend building the software from source to reduce the risk of version mismatches, i.e., situations where one dependency requires a more recent version of another dependency than the version provided by the repositories. We will start by installing a few packages from the Ubuntu repository:
apt-get install build-essential git wget
apt-get install libnetcdf-dev liblapack-dev libglu1 libpthread-stubs0-dev
apt-get install libxrender-dev libxcursor-dev libxft-dev libxinerama-dev
apt-get install qtbase5-dev qtdeclarative5-dev
CMAKE
In the case of CMAKE, instead of using a packaged version available through repositories, we recommend installing a recent version from source. The installation process is described here and is usually unproblematic. Once the process is completed, the correctness of the CMAKE installation can be checked on the command line with
cmake --version
As of July 2023, we recommend CMAKE version 3.21 or higher. CMAKE is required for the compilation of tetmag and of some of its dependencies. An insufficient CMAKE version number is a frequent source of difficulties when installing tetmag.
VTK
The VTK library is used by tetmag to read and store data related to finite-element meshes, such as calculated magnetic configurations at the mesh points, or the mesh itself. It is not necessary to install VTK from source. We can use the package provided by Ubuntu:
apt-get install libvtk9-dev
At the time of writing this installation guide, the most recent major version of VTK is 9. As newer versions become available, the libvtk version in the package name should be updated accordingly. In any case, we need the “dev” version of this library.
BOOST
The BOOST libraries provide important C++ utilities used by tetmag. There is no need to build this large library from source; here, too, we can rely on the packaged version provided by the Ubuntu repository:
apt-get install libboost-all-dev
EIGEN
The header-only C++ library EIGEN can also be installed using the corresponding package provided by the repository:
apt-get install libeigen3-dev
However, since this header-only library is particularly easy to install, it may be worthwhile to download the most recent version from source.
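For example, a recent release can be installed system-wide as follows (version 3.4.0 is used here for illustration; substitute the latest release):
wget https://gitlab.com/libeigen/eigen/-/archive/3.4.0/eigen-3.4.0.tar.gz
tar xf eigen-3.4.0.tar.gz
cd eigen-3.4.0 && mkdir build && cd build
cmake .. && make install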
CUDA
Installing the CUDA toolkit is optional, but recommended. The toolkit makes it possible to accelerate the simulations by performing a large portion of the computations on the graphics processing unit (GPU) rather than on the CPU, which can result in significantly increased computation speeds, especially in the case of large-scale problems.
For this feature, the system requires a CUDA-capable GPU, and the CUDA SDK must be installed. By default, tetmag runs on the CPU. The GPU option is activated by adding an entry solver type = gpu in an input file, as discussed in the examples section.
Although Ubuntu offers repository packages for CUDA (apt-get install nvidia-cuda-toolkit), it is preferable to install a CUDA version provided on the NVIDIA website. We recommend using version 10.2 or higher. The NVIDIA site from which the CUDA Toolkit can be downloaded provides a simple installer and detailed installation instructions. Note that the system must be rebooted after the installation process.
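After the reboot, the installation can be verified on the command line:
nvcc --version
nvidia-smi
The first command reports the version of the CUDA compiler; the second lists the detected GPUs and the installed driver version.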
tetmag compilation
Once the prerequisites listed above are met, compiling tetmag should be straightforward:
git clone https://github.com/R-Hertel/tetmag.git
cd tetmag && mkdir build && cd build
cmake ..
make -j$(nproc)
Note that an internet connection is necessary during the build process, since tetmag will download and install various external libraries.
The compilation should end with a statement that the executable tetmag has been built:
...
[ 87%] Linking CXX executable tetmag
[100%] Built target tetmag
The executable file can be moved or copied to a directory listed in the $PATH environment variable, e.g., ~/bin/, to make tetmag accessible from any directory without specifying its full path.
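For example:
cp tetmag ~/bin/
export PATH="$HOME/bin:$PATH"   # only needed if ~/bin is not already in the PATH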
Compilation with CUDA
As explained before, the tetmag software can exploit GPU acceleration based on NVIDIA’s CUDA toolkit. This feature requires a CUDA-compatible graphics card and an installation of CUDA. To compile a version of tetmag with optional CUDA acceleration, the procedure is as described above, except for an additional option -DUSE_CUDA=ON which should be passed to cmake:
cd tetmag && mkdir build && cd build
cmake -DUSE_CUDA=ON ..
make -j$(nproc)
With the USE_CUDA compile option activated, tetmag should automatically detect the available GPU architectures and compile the code accordingly if the TETMAG_FIND_CUDA_ARCH flag is set. Otherwise, the executable will be compiled for a set of common architectures, listed in the variable GPU_ARCHITECTURES in the file ./gpu/CMakeLists.txt. It may be necessary to add the architecture (compute capability) of your machine’s GPU to this list.
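If a sufficiently recent NVIDIA driver is installed, the compute capability can be queried on the command line (the compute_cap query field is only available in newer driver versions):
nvidia-smi --query-gpu=compute_cap --format=csv,noheader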
Host compiler compatibility
The standard compilation without CUDA should be unproblematic, but generating a CUDA-capable version can be more complicated. Especially with older CUDA versions, a frequent source of difficulties is an incompatibility between the host and device compiler versions. The compiler requirements for the different CUDA distributions are summarized in a table in this gist.
It can occur that the version of the host compiler, e.g., g++, is too recent for the installed CUDA version. If, for example, the output of g++ --version (entered at the command-line prompt) yields 9.3.0 and nvcc --version gives V10.2.89, then the default g++ compiler cannot be used: as indicated in the table referenced above, this CUDA version requires g++ version 8 or lower.
In such a configuration, tetmag’s attempt to use the CUDA compiler nvcc will fail and produce an error message like this:
#error -- unsupported GNU version! gcc versions later than 8 are not supported!
To solve this problem, an older version of the host compiler must be installed. This can be done without necessarily downgrading the standard compiler, e.g., by installing g++ version 7 alongside the default g++ version 9.3.0.
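On Ubuntu, for example:
apt-get install g++-7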
Once a compatible host compiler is available, tetmag needs to know where to find it. This information can be passed with the flag TETMAG_HOST_COMPILER. Assuming that the g++-7 compiler is located in /usr/bin/, the compilation would be done with
cd tetmag/build
cmake -DUSE_CUDA=ON -DTETMAG_HOST_COMPILER="/usr/bin/g++-7" ..
make -j$(nproc)
Instead of passing compiler flags on the command line, a convenient way to set various parameters and options for the compilation with cmake and make is to use ccmake. Most of the options displayed by the ccmake user interface refer to external libraries and can usually be left unchanged. The settings specifically related to tetmag are stored in the variables named TETMAG_*.
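For example, after the initial cmake run, the configuration can be inspected and edited interactively with:
cd tetmag/build
ccmake ..
In the ccmake interface, press 'c' to (re)configure and 'g' to generate the build files before running make again.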

Getting started with tetmag
Simulations with tetmag will generate a number of output files. To avoid confusion, each simulation project should be run in a dedicated directory.
To run a simulation with tetmag, three specific files are required, which must be stored in the working directory:
simulation.cfg
material001.dat
<name>.msh or <name>.vtk or <name>.vtu
Here <name> is the name of your simulation project. The first two files are in human-readable ASCII format. They provide input data on the simulation parameters and the material properties, respectively. The third file in the list contains the finite-element mesh of the simulation geometry. The mesh data can be supplied in GMSH, VTK, or VTU format.
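For instance, for a project named disk, as in the first example below, the working directory would initially contain:
simulation.cfg
material001.dat
disk.vtk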
The following sections provide a few examples with step-by-step descriptions of how to conduct various simulations with tetmag, thereby demonstrating the use and importance of these input files. The examples also discuss how the computed output files can be analyzed.
Ex. 1: Magnetic vortex in a nanodisk
In this example, we will simulate the zero-field magnetic configuration in a Permalloy disk, which will result in a vortex state. First, we define the sample geometry and the finite-element mesh. Here, we will do this with FreeCAD; other options, such as gmsh, can also be used.
Defining the geometry
In FreeCAD, open a new project and select the “Part” workbench. From there, a cylinder shape can be selected as one of the default geometries:

The radius and height of the cylinder geometry can be easily adapted in FreeCAD’s Properties panel. We will use a radius of 150 units and a thickness of 30 units (the fact that FreeCAD considers these length units to be measured in mm is irrelevant for the moment).

Generating the finite-element mesh
FreeCAD provides plugins to gmsh and netgen, two efficient finite-element mesh generators. The mesh generators become accessible by changing to the “FEM” workbench:

The icons with the letters “N” and “G” refer to netgen and gmsh, respectively. In this example, we will use netgen. In the panel on the left, we select first-order elements by unchecking the “Second order” box, and set the maximum element size to 4.00:

The resulting mesh contains about 140,000 tetrahedral elements. Returning to the main panel, the finite-element mesh can be exported by selecting the object and navigating to “File -> Export…”. In the pull-down menu, select the file type “FEM mesh formats” and name the output file disk.vtk

We now have the first input file of our simulation, disk.vtk, which stores the FEM model and the sample geometry.
Defining the material properties
The magnetic material of our nanodisk is Permalloy, whose micromagnetic properties are characterized by a ferromagnetic exchange constant \(A = 1.3 \times 10^{-11}\) J/m and a spontaneous magnetization \(M_s = 800\) kA/m. The uniaxial anisotropy constant \(K_u\) is negligibly small; we will set it to zero by simply omitting its value in the parameter definition file.
The information on the material properties is stored in an ASCII file named material001.dat, which can be generated using your favorite text editor. In our example, the file should contain these two lines:
A = 1.3e-11
Ms = 8.0e5
Defining the simulation parameters
The next step is to specify what we want to simulate. The information pertaining to the simulation parameters is stored in an ASCII file with the name simulation.cfg. In our case, the file looks like this:
name = disk
scale = 1.e-9
mesh type = vtk
alpha = 1.0
initial state = random
time step = 2.0 # demag refresh interval in ps
torque limit = 5.e-4
duration = 5000 # simulation time in ps
solver type = gpu
The meaning of the entries is as follows:
name
: The name of the simulated object. Must be identical to the stem of the filename containing the FEM mesh data. In our case, we stored the FEM mesh in the file disk.vtk, thus the name is disk.
scale
: The scaling factor relating the length units in the CAD model to the sample’s physical size in [m]. Our disk was modeled with a radius of 150 and a thickness of 30 units. By setting scale to 1.e-9, we specify that our disk has a radius of 150 nm and a thickness of 30 nm. With this scaling, the maximum element size of 4.00 chosen when generating the mesh corresponds to 4.00 nm, which is compatible with the material’s exchange length.
mesh type
: Defines the format in which the finite-element mesh is stored. Possible options are VTK, VTU, and MSH. The latter refers to files stored in GMSH format. The VTK and MSH readers can read any version of these respective formats.
alpha
: The Gilbert damping constant \(\alpha\) in the Landau-Lifshitz-Gilbert equation. Although the value of \(\alpha\) is a material-specific constant, it can be used as a control parameter in micromagnetic simulations. It is therefore defined in the simulation.cfg file. Here, for example, we use an unrealistically high damping value \(\alpha=1.0\) in order to accelerate the calculation of the equilibrium state. We recommend using only values \(\alpha \ge 0.01\). Smaller values of \(\alpha\) may lead to numerical instabilities.
initial state
: Micromagnetic simulations are numerical initial-value problems. An initial configuration must be defined, from which the magnetization structure begins to evolve. By default, the initial configuration is a homogeneous magnetization aligned along the \(z\) direction. In our simulation, we start from a fully randomized initial state. A number of keywords of the initial state entry are available to define a few basic initial configurations. These options will be described in a separate section.
time step
: The name of this entry is a simplification, in the sense that the value does not describe the actual size of the individual time steps in the integration of the Landau-Lifshitz-Gilbert equation. The step size is chosen adaptively and is typically smaller than this value. Instead, the entry describes the time during which the magnetostatic field is “frozen” as the Landau-Lifshitz-Gilbert equation is integrated. While all other effective-field terms are continuously updated, the time-consuming calculation of the magnetostatic field is performed only on a subset of time steps. Here, we update the demagnetizing field only once every 2 ps in order to speed up the calculation. For a reliable calculation of the time evolution of the magnetization structure, the refresh time should be significantly smaller. A value of 0.1 ps is usually sufficiently small, even for low-damping simulations.
torque limit
: This parameter defines a termination criterion of the simulation. The Landau-Lifshitz-Gilbert equation yields a converged, stationary state when the torque exerted by the effective field \(\vec{H}_\text{eff}\) on the magnetization \(\vec{M}\) is zero everywhere in the magnetic material. Numerically, a value of exactly zero is never achieved, but a low value of \(\max_i\lVert\vec{M}_i\times\vec{H}_{\text{eff},i}\rVert\), where \(i\) labels the discretization points, indicates a nearly converged state. The entry torque limit defines the threshold value of the torque below which a discretized magnetization structure is considered converged. As a general tendency, the maximum value of the torque decreases as the simulation progresses, albeit not always monotonically. Due to numerical effects, the local torque may remain above a certain value even when the magnetic structure is converged and the system’s energy remains constant. The torque limit criterion will therefore fail if the chosen value is too small. The choice of the threshold value may depend on the material parameters and on the time step entry. In practice, torque limit values between 1.e-4 and 1.e-3 have proven useful.
duration
: As described in the previous point, the termination criterion based on the local torque is not always reliable. In some cases, a simulation may continue indefinitely if it is not explicitly stopped. The value of duration imposes a hard limit on the simulated time, thereby acting as a configuration-independent termination criterion. The simulation ends when the physical time in ps given by this value is reached. In our case, the simulation stops after 5 ns if, by then, the torque limit criterion has not been met.
solver type
: Specifies whether GPU acceleration should be used. Possible entries are CPU and GPU.
The keyword-type entries in the simulation.cfg file are case-insensitive. More options than those listed here are available; they will be discussed in other examples.
Running the simulation
The simulation is started by launching tetmag on the command-line interface in the directory containing the above-mentioned input files:
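In the simplest case, this means entering the name of the executable:
tetmag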

Before the actual simulation begins, the code performs a number of preliminary calculations, prints a few notifications, and prepares data needed for the simulation. In particular, it sets up an H2-type matrix, which is used to efficiently calculate the magnetostatic interaction. This matrix is stored in a file <name>.h2. When the simulation is re-run, possibly with different simulation parameters, this file is read from disk, thereby saving the calculation time required to set up the matrix.
Once the preliminary calculations are finished, the simulation starts and prints several data to the terminal. Every 10 ps of simulated time, the elapsed time in ps, the total energy, the partial energies, and the average reduced magnetization along the x, y, and z directions are printed as an output line. In addition, the current value of the maximum torque (see the torque limit entry in the previous paragraph) and the simulation rate, i.e., the ratio of simulated time in femtoseconds to real (“wall clock”) time in seconds, are displayed.
After some time, the simulation finishes and indicates the total simulation time. In our example, the simulation lasted somewhat less than six minutes:

During the simulation, tetmag has written several output files to the working directory:

Once the micromagnetic simulation is completed, the working directory contains the following additional files:
a series of magnetic configurations, stored as sequentially numbered VTU files,
a file <name>.log, and
a file <name>.vtu.
The VTU files can be analyzed with ParaView, as will be discussed in the next section. The LOG file contains detailed information on the evolution of several micromagnetic parameters during the calculation. The header of the log file explains which data is stored in each column:
# Log file for simulation started Mon Jul 10 17:26:05 2023
# on host euclide by user hertel
# data in the columns is:
#(1) time in ps, (2) total energy , (3) demagnetizing energy, (4) exchange energy
#(5) uniaxial anisotropy energy, (6) Zeeman energy, (7) cubic anisotropy energy (8) surface anisotropy energy,
#(9) bulk DMI energy (10) maximum torque, (11-13) Mx, My, Mz, (14-16) Hx, Hy, Hz. All energies in J/m3, all fields in T
0.0000 3528603.5090 20824.0898 3507779.4191 0.0000 -0.0000 0.0000 0.0000 0.0000 2.847e+01 -0.0008989096018 0.001221334456 -0.002855368502 0.000000 0.000000 0.000000
2.0000 2396019.1388 39078.4496 2356940.6891 0.0000 -0.0000 0.0000 0.0000 0.0000 1.649e+01 0.0001702643459 0.004836598221 -0.005454433698 0.000000 0.000000 0.000000
4.0000 1645403.5929 55883.2351 1589520.3578 0.0000 -0.0000 0.0000 0.0000 0.0000 1.208e+01 0.0009439579436 0.009963304828 -0.01017827253 0.000000 0.000000 0.000000
(...)
By default, the data is stored every two picoseconds. The output frequency in the log file can be controlled by adding a line to the simulation.cfg file specifying the log stride value. For example, to store the data in the log file once every picosecond, the simulation.cfg file should contain this line:
log stride = 1 # interval in ps between outputs in log file
Similarly, one can modify the default output frequency in the console (every 10 ps) and for the output of the VTU configuration files (every 50 ps) with the keywords console stride and config stride, respectively. To obtain VTU files every 60 ps and a line in the console every 20 ps, one would use:
console stride = 20 # interval in ps between outputs in console
config stride = 60 # interval in ps between output of VTU configuration files
Note that anything following an octothorpe sign (#) is ignored in the input files simulation.cfg and material001.dat. The comments after this symbol are optional explanations for the user.
The end of the <name>.log file contains information on the termination of the simulation:
(...)
3118.0000 4883.1332 302.7239 4580.4093 0.0000 -0.0000 0.0000 0.0000 0.0000 2.023e-03 0.001194291178 0.001545875097 -0.003595668125 0.000000 0.000000 0.000000
3120.0000 4883.1330 302.7238 4580.4092 0.0000 -0.0000 0.0000 0.0000 0.0000 6.376e-04 0.001195353567 0.001541824985 -0.003595669787 0.000000 0.000000 0.000000
3122.0000 4883.1327 302.7236 4580.4091 0.0000 -0.0000 0.0000 0.0000 0.0000 4.926e-04 0.001196407227 0.001537789069 -0.003595671424 0.000000 0.000000 0.000000
# Convergence reached: maximum torque 0.000493 is below user-defined threshold.
# Simulation ended Mon Jul 10 17:31:56 2023
In this case, the simulation ended because the torque limit criterion was met.
Visualizing and analyzing the results
The file <name>.vtu contains the magnetic configuration of the converged state reached when the micromagnetic simulation finished. It can be opened and viewed with ParaView:

The main information is stored in the vector field “Magnetization”, which is the normalized directional field \(\vec{M}/M_s\). To display this vector field with arrows (“glyphs”, as they are called in ParaView), select the options “Orientation Array -> Magnetization” and “Scale Array -> No scale array”, as shown in the figure.
Visualizing and analyzing the computed magnetization structures with ParaView is an essential part of the workflow when performing simulations with tetmag. It is therefore important to learn how to use this visualization software. We refer to the ParaView documentation for a detailed description of its usage.
In addition to the final, converged state, tetmag outputs a series of files describing the time evolution of the magnetization during the calculation. This data, containing transient, unconverged magnetic configurations, is stored in the series of numbered VTU files mentioned before. These files can be opened in ParaView with “File -> Open…” by selecting the group disk..vtu. The time data is stored in the variable timeInPs, which can be displayed in ParaView by selecting “Filters -> Annotation -> Annotate Global Data”.

By using the green arrows in the toolbar at the top of the ParaView window, it is possible to navigate through the series of VTU files and obtain an animation of the magnetization dynamics.
The time evolution of the micromagnetic energy terms and of the average magnetization components is stored in the <name>.log file. By selecting the appropriate columns, the data can be plotted with any program that can generate two-dimensional plots, e.g., gnuplot, Grace, ggplot2 in R, or matplotlib in Python.
In our example, the evolution of the total energy, the magnetostatic (“demag”) energy, and the exchange energy in time looks like this:

Note that the energies are stored as volume-averaged energy densities, expressed in units of [\(J/m^3\)]. The graph above has been generated with this R script:
library(ggplot2)
# Read the log file; comment lines starting with '#' are skipped by default.
logdata <- read.table("disk.log")
# Columns: V1 = time [ps], V2 = total, V3 = demag, V4 = exchange energy density.
p <- ggplot(logdata, aes(x = V1)) +
  geom_line(aes(y = V2, color = "V2")) + geom_line(aes(y = V3, color = "V3")) + geom_line(aes(y = V4, color = "V4")) +
  scale_color_manual(values = c("red", "darkgreen", "blue"), labels = c("total", "demag", "exchange")) +
  scale_y_log10() + xlab("time [ps]") + ylab(expression(paste("energy density [", J/m^{3}, "]"))) + theme(legend.title = element_blank())
print(p)
Ex. 2: \(\mu\)MAG Standard Problem #4
The \(\mu\)MAG project at NIST has proposed a number of micromagnetic standard problems to compare and cross-validate simulation results obtained with different micromagnetic solvers. Here we will simulate the \(\mu\)MAG Standard Problem #4 with tetmag. The simulation will involve several steps already discussed in Example #1, such as the sample geometry definition and the FEM mesh generation, which we won’t repeat here.
This standard problem addresses the magnetization dynamics in a sub-micron-sized Permalloy thin-film element during a field-driven magnetic switching process. The sample geometry is a platelet of size 500 nm \(\times\) 125 nm \(\times\) 3 nm. For the discretization, we will use a FEM cell size of \(3.0\) nm. We start by defining the geometry with FreeCAD, as described in the previous example, and generate a tetrahedral finite-element mesh using FreeCAD’s netgen plugin.

Our mesh contains 48,569 tetrahedral elements. We export the FEM mesh file in VTK format and choose sp4_.vtk as the file name. The underscore is used here to avoid a <name> ending in a digit, which would interfere with the numbered output of VTU files generated during the simulation.
Calculating the initial configuration
The material001.dat file is identical to the one of the previous example since, according to the problem specification, the material properties are again those of Permalloy:
A = 1.3e-11
Ms = 8.0e5
The problem specification requires a simulation in two steps. First, a specific zero-field equilibrium configuration, known as the “s-state”, is calculated by applying a saturating field along the [1,1,1] direction and gradually reducing it to zero. We will use an external field of \(\mu_0 H = 1000\) mT, which we will reduce to zero in steps of 200 mT. The simulation.cfg file contains the following entries:
name = sp4_
scale = 1.e-9
mesh type = vtk
alpha = 1.0
initial state = homogeneous_x
time step = 0.5 # demag refresh interval in ps
torque limit = 5e-4
duration = 5000 # simulation time in ps
solver type = gpu
hysteresis = yes
initial field = 1000 # first field value of hysteresis branch, in mT
final field = 0 # last field of hysteresis branch, in mT
field step = 200 # step width of increment / decrement in mT
hys theta = 45 # polar angles of magnetic hysteresis field [deg]
hys phi = 45 # azimuthal angles of magnetic hysteresis field [deg]
remove precession = yes
The hysteresis keyword indicates that a sequence of equilibrium states will be calculated for different field values, according to the entries in the subsequent lines. The field direction is defined through two angles, theta and phi, of a spherical coordinate system, where theta denotes the angle enclosed between the field and the \(z\) direction, and phi is the angle between the projection of the field on the \(xy\) plane and the \(x\) axis. We use, again, a high damping constant alpha = 1.0 to speed up the calculation and set the keyword remove precession = yes to avoid unwanted precessional magnetization dynamics in this part of the calculation. We choose a somewhat smaller value for the time step than in the previous example to ensure a smooth calculation of the magnetization dynamics during the relaxation.
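In terms of these angles, the field is applied along the unit vector \(\hat{e}_H = (\sin\theta\cos\phi,\ \sin\theta\sin\phi,\ \cos\theta)\).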
Note
It is strongly recommended to use the option remove precession = yes whenever the hysteresis = yes option is set. Otherwise, the torque limit termination criterion may not be attained during the quasistatic calculation of the magnetization structure.
At the end of this first part of the simulation, we obtain the zero-field “s-state” in the file sp4_.vtu in the working directory. The subdirectory Hysteresis_ contains data on the converged states at the various field steps; we won’t need this data in this project. We rename the VTU file with the s-state configuration to sp4_s-state.vtu and delete all the VTU files containing transient states as well as the LOG file of the calculation:
mv sp4_.vtu sp4_s-state.vtu
rm sp4_00*vtu sp4_.log
We can verify with ParaView that the configuration in sp4_s-state.vtu contains the expected s-state:

Simulating the switching process
According to the problem specification, a magnetic switching process is initiated by instantaneously applying a static magnetic field of 25 mT to the platelet in the s-state configuration, with the field in the \(xy\) plane oriented along a direction enclosing an angle of \(170^\circ\) with the \(x\) axis. Furthermore, the damping constant \(\alpha\) must be set to 0.02.
To use a specific magnetic configuration stored in a VTU file (in our case, the s-state from the file sp4_s-state.vtu) as the starting configuration of a simulation, the name of the file must be provided in the initial state entry of the simulation.cfg file, prefixed with the expression fromfile_.
The simulation.cfg file used to simulate the field-driven switching contains the following entries:
name = sp4_
scale = 1.e-9
mesh type = vtk
alpha = 0.02
initial state = fromfile_sp4_s-state.vtu
time step = 0.1 # demag refresh interval in ps
duration = 1000 # simulation time in ps
solver type = gpu
external field = 25.0 # Hext in mT
theta_H = 90 # polar angle of the field direction in degree
phi_H = 170 # azimuthal angle
The torque limit entry was removed to ensure that the simulation runs for 1 ns, irrespective of the evolution of the magnetic structure. Moreover, we lowered the time step value to 0.1 ps, which is generally a safe choice for low-damping simulations like this one.
The material001.dat file remains unchanged compared to the one used for the quasistatic calculation of the s-state.
After the simulation, the resulting magnetization dynamics can be analyzed as described before. The image below displays the average \(y\) component of the magnetization \(\langle M_y\rangle/M_s\) as a function of time:

The simulation with the 3 nm mesh yields a result that is already close to those reported by other groups. Better agreement is obtained by lowering the mesh cell size to 1.5 nm. The specification of \(\mu\)MAG Standard Problem #4 requests a snapshot of the magnetic configuration in the platelet at the moment when the average \(x\) component of the magnetization first crosses the zero line. The value of \(\langle m_x\rangle\) is stored in column #11 of the LOG file. The data shows that the first zero-crossing of \(\langle m_x\rangle\) occurs between \(t=136\) ps and \(t=138\) ps.
...
132.0000 19906.6614 18013.1318 2884.8823 0.0000 -991.3528 0.0000 0.0000 0.0000 3.664e-01 0.08158099865 0.7481174953 -0.1264239146 -0.024620 0.004341 0.000000
134.0000 19811.6267 18414.0172 3004.6501 0.0000 -1607.0406 0.0000 0.0000 0.0000 3.761e-01 0.04951089997 0.7435191546 -0.1294557353 -0.024620 0.004341 0.000000
136.0000 19712.4086 18811.0095 3126.6089 0.0000 -2225.2098 0.0000 0.0000 0.0000 3.851e-01 0.01706620831 0.7375109115 -0.132472287 -0.024620 0.004341 0.000000
138.0000 19608.9898 19203.9153 3249.6787 0.0000 -2844.6042 0.0000 0.0000 0.0000 3.941e-01 -0.01569285578 0.7300725374 -0.1354566535 -0.024620 0.004341 0.000000
140.0000 19501.3684 19592.6391 3372.7034 0.0000 -3463.9740 0.0000 0.0000 0.0000 4.029e-01 -0.04870499786 0.7211918037 -0.1383929318 -0.024620 0.004341 0.000000
...
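A linear interpolation between the \(\langle m_x\rangle\) values at \(t=136\) ps and \(t=138\) ps places the zero-crossing at \(t \approx 136 + 2\times 0.0171/(0.0171+0.0157) \approx 137\) ps.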
The closest graphics output we have to this time value is the configuration at \(t=140\) ps:

Ex. 3: Current-induced dynamics
When an electric current flows through a ferromagnetic material, it acquires a spin polarization through which the conduction electrons interact with the magnetization. This effect is described by the so-called spin-transfer torque (STT). In this example, we simulate the STT-induced magnetization dynamics in a square Permalloy platelet. The micromagnetic problem simulated here is based on a proposal by Najafi et al., published in Ref. 1.
The sample is a Permalloy platelet of 100 nm \(\times\) 100 nm \(\times\) 10 nm size. We use FreeCAD to define the geometry and to generate an irregular FEM mesh with the netgen plugin. In our example, the mesh size is set to 2 nm, resulting in 39,246 irregularly shaped tetrahedral finite elements.
Calculating the initial configuration
First, we simulate the initial configuration: a vortex structure at zero external field and without spin-polarized current. The keyword initial state = vortex_xy can be used to generate a vortex-type initial structure, circulating in the \(xy\) plane, from which the magnetization can be relaxed.
The simulation.cfg file for this static part of the simulation is
name = sp_stt
scale = 1.e-9
mesh type = vtk
alpha = 1
initial state = vortex_xy
time step = 2 # demag refresh interval in ps
duration = 5000 # simulation time in ps
solver type = gpu
torque limit = 1.e-4
remove precession = yes
By performing the static simulation, we quickly obtain the converged state, stored in the file sp_stt.vtu, which we rename to sp_stt_vortex.vtu. This will be the initial state of the subsequent dynamic simulation with STT effects:
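mv sp_stt.vtu sp_stt_vortex.vtu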

Spin-transfer-torque–driven magnetization dynamics
According to the problem specification, we apply a spatially homogeneous dc current density \(j = 10^{12}\) A/m\(^2\) flowing along the positive \(x\) direction. The damping constant is set to \(\alpha=0.1\) and the non-adiabaticity parameter to \(\xi=0.05\). The degree of spin polarization \(P\) is not specified in the article by Najafi et al. For lack of a better value, we set \(P=100\)%.
The above-mentioned parameters are specified in the simulation.cfg file:
name = sp_stt
scale = 1.e-9
mesh type = vtk
alpha = 0.1
initial state = fromfile_sp_stt_vortex.vtu
time step = 0.1
duration = 14000
solver type = gpu
current type = dc
spin polarization = 1.0 # degree of spin polarization in stt dynamics
current density = 1.0 # current density for STT dynamics [in 10^12 A/m^2]
beta = 0.05 # non-adiabaticity parameter in STT term (also known as xi)
current theta = 90 # polar angle of electric current flow [in degrees]
current phi = 0 # azimuthal angle of electric current flow [in degrees]
The dynamic simulation with STT yields a displacement of the vortex along a spiralling orbit, with a radius that decreases in time. The oscillatory convergence towards a new equilibrium state can be recognized in the plots of the spatially averaged \(x\) and \(y\) components of the magnetization as a function of time:

These results are in good agreement with the data reported in Fig. 6 of the article by Najafi et al. (Ref. 1). Minor deviations can be attributed to the relatively coarse mesh used in this simulation, and to the fact that tetmag uses the electron gyromagnetic ratio \(\gamma=2.21276148\times 10^5\) rad m/(A s), whereas the problem specification indicates a slightly different value of \(2.211\times 10^5\) rad m/(A s).
Other data that the authors of Ref. 1 suggest for comparison are the average \(x\) and \(y\) components of the magnetization of the equilibrium state reached after 14 ns. Our simulation yields \(\langle M_x\rangle = -1.7009\times 10^5\) A/m and \(\langle M_y\rangle = 1.566 \times 10^4\) A/m, which compares fairly well with the data reported in Table 1 of Ref. 1. Here, too, we expect to obtain better agreement when using smaller mesh cell sizes.