The package CRYSTAL23 consists of two programs: crystal and properties.
crystal:
computes the energy, analytical gradient and wave function for a given geometry, which can also be fully optimized
computes a number of energy-based and response quantities, such as vibrational frequencies and spectra and the dielectric response
properties:
computes one-electron properties (electrostatic potential, charge density, …)
analyses the wave function in direct and reciprocal space
transforms the Bloch functions (BF) into Wannier functions (localization of BF)
and much more
The crystal program is provided in the following forms:
crystal sequential execution
crystalOMP sequential execution combined with OpenMP parallelism
Pcrystal replicated data parallel execution
PcrystalOMP replicated data parallel execution combined with OpenMP parallelism
MPPcrystal distributed data parallel execution
MPPcrystalOMP distributed data parallel execution combined with OpenMP parallelism
The following instructions mainly refer to crystal (i.e. sequential execution), but some information is also given about parallel execution and about the use of the provided scripts (runcry23, runprop23, runPcry23, runPprop23, runPcry23OMP and runMPPcry23OMP).
properties can read the wave function information computed by crystal either unformatted (file fort.9) or formatted (file fort.98).
Formatted data written in file fort.98 can be moved from one platform to another.
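As a minimal sketch (hypothetical file names; the run* scripts described below handle this automatically), a manual properties run typically makes the wave function file written by crystal available as fort.9 in the working directory before reading its own input from standard input:
cp mgo.f9 fort.9
properties < mgo.d3 > mgo_prop.out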
Conventions used in the following:
“ARCH” string to identify the operating system and/or the compiler (Linux-ifort, MacOsX-gnu)
“VERSION” string to identify the crystal version (now v1.0.1)
d12 extension of file meant to be input to crystal
d3 extension of file meant to be input to properties
Installation instructions are given for UNIX/Linux operating systems.
Examples are in C shell.
If csh is not already installed, please install it [the run* scripts are written in C shell]
log in as a generic user, and cd to your home directory
make the crystal root directory - it is called $CRY23_ROOT in the following
change directory to $CRY23_ROOT
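A minimal C-shell sketch of the steps above, assuming the root directory is called CRYSTAL23 under your home directory:
cd $HOME
mkdir CRYSTAL23
setenv CRY23_ROOT $HOME/CRYSTAL23
cd $CRY23_ROOT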
After having obtained a username and password, point your browser to:
http://www.crystalsolutions.eu/
Log in and then download the executables suitable for your architecture (the file names may change, e.g. crystal23_v1_0_x, if minor modifications are introduced)
Examples:
crystal23_v1_0_1_Linux-ifort21.4_openmpi4.1.1_exe.tar.gz
crystal23_v1_0_1_MacOsX-gnu12.1_openmpi4.1.1_exe.tar.gz
gunzip crystal23_v1_0_1_Linux-ifort21.4_openmpi4.1.1_exe.tar.gz
gunzip crystal23_v1_0_1_MacOsX-gnu12.1_openmpi4.1.1_exe.tar.gz
You should now have:
crystal23_v1_0_1_Linux-ifort21.4_openmpi4.1.1_exe.tar
crystal23_v1_0_1_MacOsX-gnu12.1_openmpi4.1.1_exe.tar
tar -xvf crystal23_v1_0_1_Linux-ifort21.4_openmpi4.1.1_exe.tar
tar -xvf crystal23_v1_0_1_MacOsX-gnu12.1_openmpi4.1.1_exe.tar
ls -l -R
The result should be something like this:
./bin:
total 8
drwxr-xr-x 3 crystal users 4096 ott 18 19:55 Linux-ifort_i64/
drwxr-xr-x 3 crystal users 4096 ott 20 18:57 MacOsx_ARM-gfortran/
./bin/Linux-ifort_i64:
total 4
drwxr-xr-x 2 crystal users 4096 ott 18 19:55 v1.0.1/
./bin/Linux-ifort_i64/v1.0.1:
total 453352
-rwxr-xr-x 1 crystal users 127482768 ott 18 19:55 crystal
-rwxr-xr-x 1 crystal users 127655768 ott 18 19:55 Pcrystal
-rwxr-xr-x 1 crystal users 104622352 ott 18 19:55 Pproperties
-rwxr-xr-x 1 crystal users 104452240 ott 18 19:55 properties
./bin/MacOsx_ARM-gfortran:
total 4
drwxr-xr-x 2 crystal users 4096 ott 12 14:56 v1.0.1
./bin/MacOsx_ARM-gfortran/v1.0.1:
total 351756
-rwxr-xr-x 1 crystal users 95184196 ott 12 14:56 crystal
-rwxr-xr-x 1 crystal users 95325029 ott 12 06:34 Pcrystal
-rwxr-xr-x 1 crystal users 84893208 ott 12 06:34 Pproperties
-rwxr-xr-x 1 crystal users 84776279 ott 12 14:56 properties
Testing instructions are given for UNIX/Linux operating systems.
Examples are in C shell.
Point your browser to: https://www.crystal.unito.it/utils/utils23.zip and download the file into the CRYSTAL root directory, $CRY23_ROOT
decompress and untar the file:
gunzip utils23.tar.gz
tar -xvf utils23.tar
ls utils23
The result is the list of shell scripts:
runcry23 C shell script to run crystal [and properties]
runcry23OMP C shell script to run crystalOMP [and properties]
runPcry23 template to prepare a script to run Pcrystal
runPcry23OMP template to prepare a script to run PcrystalOMP
runMPPcry23 template to prepare a script to run MPPcrystal
runMPPcry23OMP template to prepare a script to run MPPcrystalOMP
runprop23 C shell script to run properties
runPprop23 template to prepare a script to run Pproperties
cry23.cshrc C shell to define CRYSTAL23 environmental variables
cry23.bashrc bash to define CRYSTAL23 environmental variables
cd utils23
and:
chmod +x run*
Back up the cry23.cshrc file as cry23.old.cshrc:
mv cry23.cshrc cry23.old.cshrc
Edit the cry23.cshrc shell to define the local value of the environmental variables:
Variable Name | Meaning | name used in the example |
---|---|---|
CRY23_ROOT | CRYSTAL23 root directory | CRYSTAL23 |
CRY23_BIN | binary directory | bin |
CRY23_ARCH | ARCH string to identify the executable | Linux-ifort_i64 |
VERSION | CRYSTAL23 version | v1.0.1 |
CRY23_SCRDIR | temporary directory for scratch files | $HOME |
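As a sketch, the corresponding setenv lines in cry23.cshrc, using the example values from the table above (assuming the CRYSTAL23 root is $HOME/CRYSTAL23; adjust to your installation):
setenv CRY23_ROOT $HOME/CRYSTAL23
setenv CRY23_BIN bin
setenv CRY23_ARCH Linux-ifort_i64
setenv VERSION v1.0.1
setenv CRY23_SCRDIR $HOME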
source cry23.cshrc (or source cry23.bashrc)
N.B. As is, every time you open a terminal the variables defined in cry23.cshrc or cry23.bashrc will be printed. To disable this, comment out every echo line in the file by prefixing it with a hash sign (#).
In order to run crystal and properties, all previous steps must have been completed successfully. Type:
ls
to check your installation. The result should be:
bin/ directory with executables
utils23/ directory with utilities
make the directory test_cases and move into it
download input test cases from:
http://www.crystal.unito.it/test_cases/inputs_wf.tar.gz
For tests on geometry optimization and vibrational frequencies calculation see the CRYSTAL tutorials home page:
http://www.crystal.unito.it/tutorials/index.html
or move to http://www.crystal.unito.it ==> documentation ==> test cases and download input files.
gunzip *.gz
tar -xvf inputs.......tar.
From the directory $CRY23_ROOT/test_cases type the command:
ls -F
you should find the following directories:
inputs/ directory with CRYSTAL23 inputs to crystal (*.d12) and properties (*.d3)
crystal_root -----bin---ARCH1----VERSION1----crystal[,Pcrystal]
| | properties[,Pproperties]
| ARCH2----VERSION1----crystalOMP[,PcrystalOMP]
| properties[,Pproperties]
test_cases---inputs--- test01.d12
| | test02.d12
| | .........
| |---outputs--- test01.out
| | test02.out
| | . . . . .
| |
|test_xxxx|---inputs . . . .
| |
| |---outputs . . . .
. . . . . . .. . . .
utils23-----cry23.cshrc
| cry23.bashrc
| runcry23
| runcry23OMP
| runPcry23
| runPcry23OMP
| runMPPcry23
| runMPPcry23OMP
| runprop23
| runPprop23
To test the program with the supplied test case inputs, make the directory test_first from $CRY23_ROOT:
mkdir test_first
In order for crystal and properties to read input files from the test dataset, set these two variables (csh):
setenv CRY23_INP "$CRY23_ROOT/test_cases/inputs" setenv CRY23_PROP "$CRY23_ROOT/test_cases/inputs"
setenv CRY23_INP "$CRY23_ROOT/test_cases/inputs"
setenv CRY23_PROP "$CRY23_ROOT/test_cases/inputs"
or (bash)
export CRY23_INP=$CRY23_ROOT/test_cases/inputs
export CRY23_PROP=$CRY23_ROOT/test_cases/inputs
If you do not set the two variables, input and output files are assumed to be in the current directory.
Test11 is bulk MgO and also provides data for the properties program.
runcry23 test11
The programs crystal and properties are executed.
In the current directory the following files will be written:
test11.out standard output (crystal+properties)
test11.f9 unformatted wf data (written by crystal)
test11.f98 formatted wf data (written by crystal)
To check the execution, issue the following command to find the string “GY(HF” (see the CRYSTAL23 User’s Manual, Appendix, “Relevant Strings”):
grep "GY(HF" test11.out
grep "GY(HF" test11.out
the correct answer should be:
TOTAL ENERGY(HF)(AU)( 6) -2.7466419186151E+02 DE 1.7E-10 tst 2.2E-09 PX 3.3E-04
That string contains the total energy/cell of bulk MgO.
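As a sketch, a short C-shell loop (hypothetical test names) can run several tests and collect the final energy lines in one file; grepping for "GY(" catches the total energy line of both HF and DFT runs:
foreach t (test11 test12 test13)
  runcry23 $t
  grep "GY(" $t.out >> energies.txt
end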
Band structure, density of states and charge density maps are also computed by properties:
test11.f25 formatted charge density maps data (written by properties)
test11_dat.DOSS formatted DOSS data (written by properties)
test11_dat.BAND formatted BAND data (written by properties)
They can be visualized using CRYSPLOT, a web-oriented visualization tool to plot different properties. It can be used directly from a web browser at https://crysplot.crystalsolutions.eu
CRYSTAL provides several scripts, such as runPcry23, runPcry23OMP, runMPPcry23, runMPPcry23OMP and runPprop23, useful for the parallel execution of the Pcrystal, PcrystalOMP, MPPcrystal, MPPcrystalOMP and Pproperties executables, respectively. If your system adopts a queueing system, such as SLURM, you might need to write a specific script for it. Nevertheless, these scripts can be useful as a guide.
Only a few parts might need customisation in the runPcry23 and runPprop23 scripts.
Let us look in detail at runPcry23; runPprop23 works in a completely analogous way.
The first point to be inspected can be found around line 200 of the runPcry23 script:
set MPIDIR = /replace/this/line/with/your/own/mpibin/directory
set MPIBIN = mpirun
Here the user must point to the specific Open MPI installation on his/her system. This generally has to be the same distribution and version used to build the executables (be it the distributed executable or the one compiled locally from the precompiled objects).
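For example (hypothetical path; point it to the Open MPI installation actually used on your system):
set MPIDIR = /usr/local/openmpi-4.1.1/bin
set MPIBIN = mpirun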
The Job launching line:
${MPIDIR}/${MPIBIN} -machinefile $CRY23P_MACH/machines.LINUX -np $NPROCS $TMPDIR/Pcrystal < $TMPDIR/INPUT >>& $OUTDIR/$OUTFILE
which is found around line 460 of the script, does not generally need to be customised, but the user might have to adapt it to the specific Open MPI installation.
The next step is to provide two files, situated in the directory defined by the $CRY23P_MACH environment variable, which contain the list of nodes involved.
The first, called machines.LINUX (if the name has to be changed, it must be changed also in the command line above) contains the list of computing nodes where the job will run. According to the specific dialect of the used MPI implementation, additional options (such as the maximum number of cores per node to be used) can be provided.
When the scratch disk is not shared among nodes, the nodes.par file must contain a similar list as above, plus the hostname of the launching node (if the launching node is not a computing node). This file is used by the script to create temporary directories on all the nodes and to copy via ssh (scp command) the needed files (input, executables, restart units, wave function, output units) between the launching directory and the scratch directories on the nodes. Note that permission to create and access such directories must be granted on all nodes, including the launching node.
If the system features a shared disk, the nodes.par file can contain only one line.
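As an illustration with hypothetical hostnames, for a two-node job launched from a front-end node the two files could look like:
machines.LINUX:
node1
node2
nodes.par (launching node plus computing nodes, for a non-shared scratch disk):
frontend
node1
node2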
 
Overview
The basic parallel versions of the CRYSTAL modules, denoted Pcrystal and Pproperties, use a replicated data algorithm. Each host runs the same code and performs a number of independent tasks which are distributed at run time. One host is chosen as the master. The master host spawns the program onto the other hosts (slaves) and performs dynamic load balancing of task execution via a shared atomic counter. During integral generation, a task is defined as the calculation of a block of integrals. Thus each node computes a number of integrals which are stored on its local disk.
During a SCF cycle, a partial Hamiltonian matrix (F) is built on each node from those integrals which have been stored locally. The matrices are then passed between nodes so that each has a complete copy. The diagonalization of F at each k-point is treated as an independent task which is distributed. After diagonalization the eigenvalues are communicated to all nodes.
This strategy is comparatively easy to implement and successful on architectures where each node has access to fast disk storage and sufficient memory to run a complete copy of CRYSTAL. Low speed communication hardware (such as Ethernet) is usually sufficient. Performance depends critically on the system considered.
The integral generation step is performed efficiently when the number of integrals to be generated is much larger than the number of nodes. This condition is satisfied in most applications. Machines with up to 64 nodes have been used effectively on large cases. In the SCF process the construction of F is also efficient. Diagonalization of F is performed efficiently if the number of k-points is much larger than the number of nodes. This condition is usually not satisfied for large systems and thus diagonalization may be the most costly phase.
The parallel version of CRYSTAL requires a mechanism for initiating processes on remote machines and a library of routines to provide inter-process communication. There are many implementations of this functionality available, and CRYSTAL has been modified to take advantage of the MPI message-passing library.
Running the MPI parallel version of CRYSTAL under Linux
The CRYSTAL23 parallel executables for Linux (Pcrystal, PcrystalOMP, Pproperties) are based on the Open MPI implementation of the MPI message-passing library.
The CRYSTAL23 parallel version is meant to run on homogeneous workstation networks, Beowulf clusters and individual workstations.
To run the MPI parallel version of CRYSTAL23 under Linux special attention must be paid to set the proper environment:
Workstation clusters require each process in a parallel job be started individually.
The procedure to run CRYSTAL23 can then be summarized as:
Provide a file called machines.arch, where arch is the architecture of the system (e.g. LINUX); it can be located in the working directory. The format is one hostname per line, with either hostname or hostname:n, where n is the number of processors in a cluster of symmetric multiprocessors. The hostname should be the same as the result of the command “hostname”. For example:
node9
node10
node11
node12
node13
node14
Then run mpirun as:
mpirun -np nprocs -machinefile machines.arch Pcrystal
with the file machines.arch located in the working directory. According to the list of nodes above, if nprocs=4 the program will run on node9, node10, node11 and node12.
The output will be displayed on standard error. Use common Unix commands for redirecting stderr to a file.
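A hedged C-shell example of a complete launch, mirroring the job launching line used by runPcry23 (hypothetical process count and output file name; the input deck is assumed to have been copied to the file INPUT in the working directory):
mpirun -np 4 -machinefile machines.LINUX Pcrystal < INPUT >& mgo_par.out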
Note that scripts are available to run Pcrystal and Pproperties (see: http://www.crystal.unito.it/utils/utils23.zip)
 
Due to the many flavours of Linux and MPI libraries, it is not practical to distribute a single self-contained executable: differences in the system libraries that handle parallel execution, as well as in the available mathematical libraries, prevent building a general executable. These instructions allow a user to build the CRYSTAL23 executables starting from pre-compiled object modules by compiling just the system-dependent parts. In the following we will refer to Pcrystal and PcrystalOMP for the parallel versions of CRYSTAL23, which run on replicated data, and to MPPcrystal and MPPcrystalOMP for the massively parallel versions, which rely on highly optimized standard routines to handle matrix operations over thousands of processors.
CRYSTAL23 dependencies
Both Pcrystal and MPPcrystal run over MPI, so the user needs to install an Open MPI distribution on the cluster or use the one already present on the system.
Additionally, PcrystalOMP, MPPcrystal and MPPcrystalOMP depend on the BLAS, LAPACK and ScaLAPACK libraries (e.g. from the Intel MKL libraries). Thus, the user needs to install these libraries or ask the system administrator for them.
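Before building, a few quick C-shell checks can verify that the prerequisites are visible; the exact module or path setup depends on your cluster:
which mpif90
mpirun --version
echo $MKLROOT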
mkdir CRYSTAL23
cp crystal23_v1_0_1_Linux-ifort21.4_Pdistrib.tar.gz CRYSTAL23/.
cd CRYSTAL23
tar -zxvf crystal23_v1_0_1_Linux-ifort21.4_Pdistrib.tar.gz
cd build
cd Xmakes
For Linux systems using Intel Fortran OneAPI Compiler
F90 = mpif90
LD = $(F90)
PLD = mpif90
F90COMMON = -stand=f08 -diag-disable 7373 -diag-error=5198,6182,6893,6916,6919,7374,7416,7423,8089,8586 \
-align -static-intel -cxxlib
F90FLAGS = $(F90COMMON) -O3 -march=core-avx2
F90BASIS = $(F90COMMON) -O0
F90GORB = $(F90COMMON) -O2
F90DENS = $(F90COMMON) -O2
F90FIXED = -FI
F90FREE = -FR
SAVEMOD = -module $(MODDIR)
INCMOD = -I$(MODDIR)
LDFLAGS = $(F90FLAGS)
LDLIBS = $(LIBXCFUN) -lm
MXMB = $(OBJDIR)/libmxm.o
MACHINE_C=mach_linux
CC = icc
CFLAGS = -O2 -vec-report0 -Wall -diag-disable 177,279,383,869,981,1418,1419,1572 -DNDEBUG
CXX = icpc
CXXFLAGS = $(CFLAGS) -fno-rtti -fno-exceptions
# MPI harness
HARNESS = $(MPI)
cd ..
make all
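If the build completes successfully, the executables should appear under the bin directory of this distribution; as a quick check (the exact architecture subdirectory name may differ on your system):
ls -R ~/CRYSTAL23/bin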
mkdir CRYSTAL23
cp crystal23_v1_0_1_Linux-ifort21.4_PdistribOMP.tar.gz CRYSTAL23/.
cd CRYSTAL23
tar -zxvf crystal23_v1_0_1_Linux-ifort21.4_PdistribOMP.tar.gz
cd build
cd Xmakes
For Linux systems using Intel Fortran OneAPI Compiler and OpenMP parallelism
F90 = mpif90
LD = $(F90)
PLD = mpif90
F90COMMON = -stand=f08 -diag-disable 7373 -diag-error=5198,6182,6893,6916,6919,7374,7416,7423,8089,8586 \
-align -static-intel -cxxlib -qopenmp
F90FLAGS = $(F90COMMON) -O3 -march=core-avx2
F90BASIS = $(F90COMMON) -O0
F90GORB = $(F90COMMON) -O2
F90DENS = $(F90COMMON) -O2
F90FIXED = -FI
F90FREE = -FR
SAVEMOD = -module $(MODDIR)
INCMOD = -I$(MODDIR)
LDFLAGS = $(F90FLAGS)
EIGENV = $(OBJDIR)/diag_lapack.o
MATMULT = $(OBJDIR)/mult_blas.o
MKLPATH = $(MKLROOT)/lib/intel64
LDLIBS = $(LIBXCFUN) -Wl,--start-group \
$(MKLPATH)/libmkl_intel_lp64.a $(MKLPATH)/libmkl_intel_thread.a \
$(MKLPATH)/libmkl_core.a $(MKLPATH)/libmkl_blacs_openmpi_lp64.a -Wl,--end-group -liomp5 -lpthread -lm -ldl
MXMB = $(OBJDIR)/libmxm.o
MACHINE_C=mach_linux
CC = icc
CFLAGS = -O2 -vec-report0 -Wall -diag-disable 177,279,383,869,981,1418,1419,1572 -DNDEBUG
CXX = icpc
CXXFLAGS = $(CFLAGS) -fno-rtti -fno-exceptions
# MPI harness
HARNESS = $(MPI)
# https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-link-line-advisor.html
The user should specify the following path: MKLPATH, the directory where the MKL libraries have been installed.
6. Return to the build directory
cd ..
7. type
make all
8. the executables crystalOMP, properties, PcrystalOMP and Pproperties will be written to ~/CRYSTAL23/bin/Linux-ifort_i64_omp/v1.0.1
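A hedged C-shell sketch of running the OpenMP executable on a single node, assuming the cry23 environment has been set as described above and that the number of threads is controlled through the standard OMP_NUM_THREADS variable:
setenv OMP_NUM_THREADS 8
runcry23OMP test11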
cd CRYSTAL23
tar -zxvf crystal23_v1_0_1_Linux-ifort21.4_MPPdistrib.tar.gz
cd build
cd Xmakes
In the following examples we will refer to the Intel Fortran OneAPI compiler.
For the case in which all libraries are provided by MKL the inc file looks like:
For Linux systems using Intel Fortran OneAPI Compiler
F90 = mpif90
LD = $(F90)
PLD = mpif90
F90COMMON = -stand=f08 -diag-disable 7373 -diag-error=5198,6182,6893,6916,6919,7374,7416,7423,8089,8586 \
-align -static-intel -cxxlib
F90FLAGS = $(F90COMMON) -O3 -march=core-avx2
F90BASIS = $(F90COMMON) -O0
F90GORB = $(F90COMMON) -O2
F90DENS = $(F90COMMON) -O2
F90FIXED = -FI
F90FREE = -FR
SAVEMOD = -module $(MODDIR)
INCMOD = -I$(MODDIR)
LDFLAGS = $(F90FLAGS)
LDLIBS = $(LIBXCFUN) -lm
MXMB = $(OBJDIR)/libmxm.o
MACHINE_C=mach_linux
CC = icc
CFLAGS = -O2 -vec-report0 -Wall -diag-disable 177,279,383,869,981,1418,1419,1572 -DNDEBUG
CXX = icpc
CXXFLAGS = $(CFLAGS) -fno-rtti -fno-exceptions
# MPI harness
HARNESS = $(MPI)
# https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-link-line-advisor.html
MKL=$(MKLROOT)/lib/intel64
MPPLIB=-L$(MKL) $(MKL)/libmkl_scalapack_lp64.a -Wl,--start-group \
$(MKL)/libmkl_intel_lp64.a $(MKL)/libmkl_sequential.a \
$(MKL)/libmkl_core.a $(MKL)/libmkl_blacs_openmpi_lp64.a -Wl,--end-group -lpthread -lm -ldl
The user should specify the MKL path: MKL, the directory where the MKL libraries have been installed.
cd ..
make MPP
make all
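A hedged example of launching the massively parallel executable, mirroring the Pcrystal job launching line shown earlier (hypothetical process count and file names; the input deck is assumed to be available as INPUT in the working directory):
mpirun -np 64 -machinefile machines.LINUX MPPcrystal < INPUT >& mgo_mpp.out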
cd CRYSTAL23
tar -zxvf crystal23_v1_0_1_Linux-ifort21.4_MPPdistribOMP.tar.gz
cd build
cd Xmakes
In the following examples we will refer to the Intel Fortran OneAPI compiler.
For the case in which all libraries are provided by MKL the inc file looks like:
For Linux systems using Intel Fortran OneAPI Compiler and OpenMP parallelism
F90 = mpif90
LD = $(F90)
PLD = mpif90
F90COMMON = -stand=f08 -diag-disable 7373 -diag-error=5198,6182,6893,6916,6919,7374,7416,7423,8089,8586 \
-align -static-intel -cxxlib -qopenmp
F90FLAGS = $(F90COMMON) -O3 -march=core-avx2
F90BASIS = $(F90COMMON) -O0
F90GORB = $(F90COMMON) -O2
F90DENS = $(F90COMMON) -O2
F90FIXED = -FI
F90FREE = -FR
SAVEMOD = -module $(MODDIR)
INCMOD = -I$(MODDIR)
LDFLAGS = $(F90FLAGS)
EIGENV = $(OBJDIR)/diag_lapack.o
MATMULT = $(OBJDIR)/mult_blas.o
MKLPATH = $(MKLROOT)/lib/intel64
LDLIBS = $(LIBXCFUN) -Wl,--start-group \
$(MKLPATH)/libmkl_intel_lp64.a $(MKLPATH)/libmkl_intel_thread.a \
$(MKLPATH)/libmkl_core.a $(MKLPATH)/libmkl_blacs_openmpi_lp64.a -Wl,--end-group -liomp5 -lpthread -lm -ldl
MXMB = $(OBJDIR)/libmxm.o
MACHINE_C=mach_linux
CC = icc
CFLAGS = -O2 -vec-report0 -Wall -diag-disable 177,279,383,869,981,1418,1419,1572 -DNDEBUG
CXX = icpc
CXXFLAGS = $(CFLAGS) -fno-rtti -fno-exceptions
# MPI harness
HARNESS = $(MPI)
# https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-link-line-advisor.html
MKL=$(MKLROOT)/lib/intel64
MPPLIB=-L$(MKL) $(MKL)/libmkl_scalapack_lp64.a -Wl,--start-group \
$(MKL)/libmkl_intel_lp64.a $(MKL)/libmkl_sequential.a \
$(MKL)/libmkl_core.a $(MKL)/libmkl_blacs_openmpi_lp64.a -Wl,--end-group -liomp5 -lpthread -lm -ldl
The user should specify the following paths: MKLPATH and MKL, the directories where the MKL libraries have been installed.
cd ..
make MPP
make all
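A hedged sketch of a hybrid MPI+OpenMP launch of MPPcrystalOMP (hypothetical node, process and thread counts; process placement and thread binding options depend on the local Open MPI installation and should be checked with your system administrator):
setenv OMP_NUM_THREADS 4
mpirun -np 16 -machinefile machines.LINUX MPPcrystalOMP < INPUT >& mgo_mppomp.out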