Making an overview presentation of the scaling relations

The following video-presentation – for the CHEAC Summer School 2025 – retells our review on scaling relations in electrocatalysis https://chemrxiv.org/engage/chemrxiv/article-details/67ed469081d2151a02b33a98

The final video

From the beginning, I decided to try AI for preparing the presentation. Eventually, the only way to record the video turned out to be the traditional one. Together with my co-authors Ritums and Nadezda, we used PowerPoint with its slide-by-slide recording feature. As we were in three different locations, we exchanged the presentation several times while recording. I used ChatGPT 4o and 5 to write the lecturer’s notes for every slide. In particular, I gave the chat our article’s PDF file and then discussed every slide’s text using the canvas feature to polish it iteratively. Nadezda also used ChatGPT to refine her slides before reading them aloud. Overall, I spent over two weeks planning the presentation, then a week polishing the slides, then several days recording and re-recording slides. And finally, I got the final video-presentation.

Adding voice to a ready presentation

app.pictory.ai does a relatively good job of reading the lecturer’s notes in a ready presentation. Though, it reads “Jan” and “OOH” in a funny way, and it adds a lot of 10–20 second pauses. Also, the slide numbering is off, as are all animations, and the picture is cut off at the bottom. But overall, it takes only around 2 hours to generate and process this voiced video.

Use NotebookLM to make a podcast

In the prompt I specified to avoid banned tells, see https://doublelayer.eu/vilab/2024/12/17/list-of-banned-tells-for-gpt/ Well, I forbade the word “pivotal”, but the AI still uses “pivotal”.

I am not responsible for the result 🙂 I have listened to it, and it sounds OK-ish.

Using Gemini in Google Slides

It does not work for me. Gemini wants to draw images, while I just want to insert my own figures.

All I need is to convert Figures to Slides

https://www.magicslides.app promises to do exactly that, but I failed with a notice that only files below 5 MB are allowed.

The SlideAI extension also does not do what I want.

Ufff … manual upload is still the fastest and most robust way. Well, it is not so simple, as most of my figures are in PDF, so I wrote this script to convert everything to PNG. Then it took me 2 minutes to drag-and-drop all the PNG figures into my presentation. Hurray!

#!/bin/bash

# Create output folder
mkdir -p png

# List of input files
files=(
"Figure 1 mechanisms.png"
"Figure 18 Timeline.png"
"FIgure 14 distances.pdf"
"Figure 11 relative.pdf"
"Figure 6 3dvolcano_withscaling.pdf"
"Figure 2 publications.pdf"
"Figure 5 3dvolcano.pdf"
"Figure 17 perspectives.pdf"
"Figure 16 O_bypassing.pdf"
"Figure 15 O_pushing.pdf"
"Figure 12 O_breaking.pdf"
"Figure 13 O_switching.png"
"Figure 10 O_tuning.pdf"
"Figure 7 projection_potential.pdf"
"Figure 9 projection_ads.pdf"
"Figure 8 timeline.pdf"
"Figure 3 ass_diss.png"
"Figure 4 scalings.png"
)

# Loop through files
for f in "${files[@]}"; do
  base=$(basename "$f")
  name="${base%.*}"
  ext="${base##*.}"
  
  if [[ "$ext" == "pdf" ]]; then
    convert -density 300 "$f" -quality 100 "png/${name}.png"
  elif [[ "$ext" == "png" ]]; then
    cp "$f" "png/${name}.png"
  else
    echo "Unsupported file type: $f"
  fi
done
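Instead of maintaining a hard-coded file list, the same logic can be sketched in Python as a dry run that only prints the would-be commands (a minimal sketch; nothing here actually invokes ImageMagick, and the file names in the usage line are just examples):

```python
from pathlib import Path

def plan_conversions(files):
    """Return the shell commands that the bash script above would run."""
    cmds = []
    for f in map(Path, files):
        out = Path('png') / (f.stem + '.png')
        if f.suffix.lower() == '.pdf':
            # Rasterize PDF figures at 300 dpi
            cmds.append(f'convert -density 300 "{f}" -quality 100 "{out}"')
        elif f.suffix.lower() == '.png':
            # PNG figures are just copied
            cmds.append(f'cp "{f}" "{out}"')
        else:
            cmds.append(f'echo "Unsupported file type: {f}"')
    return cmds

for cmd in plan_conversions(['Figure 1 mechanisms.png', 'Figure 2 publications.pdf']):
    print(cmd)
```

Swapping the explicit list for `Path('.').glob('Figure *')` would pick up all figures automatically.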

Use NotebookLM to create FAQ

Pretty cool – NotebookLM made a FAQ.

What are scaling relations in electrocatalysis, and why are they important?

Scaling relations are correlations between the adsorption energies of reaction intermediates on a catalyst’s surface. They are crucial in multi-step electrocatalytic reactions, such as the oxygen reduction reaction (ORR), carbon dioxide reduction (CO2R), and nitrogen reduction (N2RR). The concept emerged in 2005 with the discovery of linear relations between adsorption energies of intermediates like OH, OOH, and O on metal surfaces. Understanding these relations is vital because they define fundamental chemical limitations in electrocatalytic reactions, impacting the design of more efficient catalysts for energy conversion technologies like electrolysers, fuel cells, and metal-air batteries.

How do scaling relations limit the efficiency of oxygen electrocatalysis?

In oxygen electrocatalysis, particularly the oxygen reduction reaction (ORR), the adsorption energies of key intermediates (OOH, OH, O) are correlated by scaling relations. These correlations constrain the achievable catalytic activity, often visualised on “volcano plots.” The OOH-OH and O-OH scaling relations, for instance, mean that if a catalyst binds one intermediate optimally, it might bind another too strongly or too weakly, preventing it from reaching the ideal catalytic activity (the “volcano top”). This limitation is significant, as experimental results have shown catalytic overpotentials converging to a limit set by these relations for over two decades, hindering progress in sustainable energy solutions.

What are the main reaction mechanisms in oxygen electrocatalysis, and how does catalyst geometry influence them?

Oxygen electrocatalysis primarily proceeds via two mechanisms: associative and dissociative. The associative mechanism, which dominates most known catalysts, involves intermediates like OOH, OH, and O adsorbing at a single active site. Geometrically, this requires only one atom in the active site. The dissociative mechanism, conversely, requires at least two neighbouring atoms to accommodate dissociation products (O and OH). On metal surfaces, a spatial mismatch often prevents the dissociative mechanism, as O preferentially adsorbs on hollow sites and OH on top sites. However, dual-atom site catalysts (DACs) can facilitate dissociative pathways by providing two adjacent sites, allowing for the adsorption of dissociation products. The inter-atomic distance within these active sites is a critical geometric parameter that influences the energy barrier for dissociation, balancing thermodynamics and kinetics.

What is the “volcano plot” in electrocatalysis, and how do scaling relations affect it?

The “volcano plot” is a theoretical framework used to understand electrocatalysis, typically representing overpotential or activity as an “altitude” against adsorption energy descriptors. For ORR, it correlates adsorption energies with deviations from the thermodynamic equilibrium potential. Scaling relations define the “paths” or “fixed climbing routes” on this volcano plot that are accessible to catalysts. For example, the OOH-OH scaling relation appears as a plane on the three-dimensional volcano, and catalysts following this relation are confined to a specific line on the volcano’s surface. This means that while an “ideal catalyst” (the volcano’s apex) might exist theoretically, scaling relations prevent most catalysts from reaching it, limiting the search for optimal catalysts to a two-dimensional projection.

What are the five general strategies for “manipulating” scaling relations in electrocatalysis?

The review outlines five general strategies for manipulating scaling relations to enhance electrocatalytic performance:

  1. Tuning: Adjusting the adsorption energy of a key intermediate (e.g., ∆GOH) to optimise catalyst performance within the constraints of an existing scaling relation, adhering to the Sabatier principle.
  2. Breaking: Decreasing the intercept (β) of a scaling relation by selectively stabilising one intermediate over another (e.g., OOH relative to OH), often by introducing spectator groups that induce stabilising interactions.
  3. Switching: Changing the slope (α) of a scaling relation by enabling an alternative reaction mechanism (e.g., switching from an associative to a dissociative mechanism in ORR) to avoid problematic intermediates. This usually requires dual active sites.
  4. Pushing: A combined strategy that changes the slope and adjusts the intercept, simultaneously switching to an alternative mechanism and using stabilising interactions (similar to breaking).
  5. Bypassing: Completely decoupling adsorption energies by switching between two distinct states of the catalyst (e.g., geometric or electronic) during the reaction cycle, with each state having optimal adsorption energies for specific intermediates. This strategy aims to eliminate all scaling relation constraints.

How does the “breaking” strategy specifically aim to overcome the OOH-OH scaling relation?

The “breaking” strategy focuses on reducing the intercept of the OOH-OH scaling relation (from approximately 3.2 eV to an ideal value of 2.46 eV) by selectively stabilising the OOH intermediate relative to OH. This typically involves introducing spectator groups or a second adsorption site near the active site. These spectators can form hydrogen bonds or other stabilising interactions with OOH, effectively shifting its adsorption energy without proportionally affecting OH. While challenging to achieve experimentally, this strategy has been demonstrated in oxygen evolution reactions (OER) and more recently in ORR using dual-atom catalysts (DACs) with specific active sites like PN3FeN3, where the phosphorus acts as a spectator to stabilise OOH through hydrogen bonding.

What role do Single-Atom Site Catalysts (SACs) and Dual-Atom Site Catalysts (DACs) play in manipulating scaling relations?

Single-Atom Site Catalysts (SACs) and Dual-Atom Site Catalysts (DACs) are crucial in manipulating scaling relations due to their distinct geometric and electronic properties. SACs typically allow for “on-top” adsorption, primarily favouring the associative mechanism in ORR. DACs, with their two neighbouring active sites, offer the possibility of accommodating two dissociation products simultaneously, thereby enabling the dissociative mechanism. This ability to switch mechanisms is key to the “switching” strategy, where DACs can replace the OOH intermediate with two distinct O and OH intermediates adsorbed at separate sites. Furthermore, the precise control over inter-atomic distances and curvature in DACs allows for fine-tuning of electronic structures and promoting specific interactions (like hydrogen bonding), contributing to “breaking” and “pushing” strategies.

What is the ultimate goal of manipulating scaling relations, and how does the “bypassing” strategy contribute to this vision?

The ultimate goal of manipulating scaling relations is to achieve ideal catalyst performance, ideally with zero overpotential, by overcoming the fundamental limitations imposed by these correlations. The “bypassing” strategy represents the most ambitious approach towards this goal. It seeks to completely decouple the adsorption energies of reaction intermediates by allowing the catalyst to switch between two or more distinct states (e.g., geometric, electronic, or photonic) during the reaction cycle. Each state would be optimally configured to bind specific intermediates at the ideal energy values required for efficient catalysis. While seemingly challenging in practice, this concept, inspired by natural enzymes like cytochrome c oxidase, offers a theoretical pathway to eliminate all scaling constraints and achieve the theoretical apex of the volcano plot, pushing the boundaries of what is currently achievable in electrocatalysis.

Uniting “simulants” over pizza

This semester I am co-organising a seminar on computer simulations (3 ECTS, LOTI.05.076). One of the aims is to gather and unite researchers from different institutes. Our common topic is using computers in research, so we are “simulants”, i.e. simulating reality via calculations. Some of the core organisers are pictured in the centre, from left to right: Taavi Repän, Tauno Tiirats, Veronika Zadin, and Juhan Matthias Kahk.

My first talk was about running simulations on HPC, e.g. using apptainers. Probably because of the free pizza, there were two to three dozen participants from the institutes of Chemistry, Physics, and Technology, which is a surprisingly high number for the University of Tartu. It is a great start, and I am looking forward to contributing more to strengthening the collaboration between the institutes.

Colors in ASE

Updated colors for atoms in ASE in 2024 look like this:

For POV rendering there are several texture options: ASE2, ASE3, Glass, Glass2, Intermediate, JMOL, Pale, Simple, VMD. I like Intermediate because it has no reflections or glare.

Working with cubes

Working with cubes can be tedious. I needed to show a change in the electronic density of a MOF. For that I made two cubes, for the neutral and the charged MOF, and then took their difference using cube_tools, like this.

import numpy as np
from cube_tools import cube

# Load the cube files using the cube class
cube1 = cube('mof_opt_0.0.cube')
cube2 = cube('mof_opt_2.0.cube')

# Subtract the data from cube1 from cube2
cube_diff = cube('mof_opt_2.0.cube')
cube_diff.data = cube2.data - cube1.data

# Get z-axis data and find indices where z > 13.3 (jellium density)
z_indices_above_threshold = np.where(cube_diff.Z > 13.3)[0]

# Remove densities above z = 13.3 by setting them to zero
for idx in z_indices_above_threshold:
    cube_diff.data[:, :, idx] = 0

# Save the modified cube data to a new file
cube_diff.write_cube('cdd.cube')

Once I had the charge density difference and opened it in VMD, I realised that one part of my MOF sat right at the border of the periodic cell, so that part of the density was split. So, I used a terminal command to shift the cube, like this: “cube_tools -t -24 -36 0 cdd.cube”. I then had to shift the positions of the atoms manually, taking the voxel size into account. The next challenge was hiding half of my MOF to clear the view, so I used this tcl syntax in VMD:

vmd > mol clipplane normal 0 0 0 {1 0 0}
vmd > mol clipplane center 0 0 0 {3 0 0}
vmd > mol clipplane status 0 0 0 1
vmd > mol clipplane normal 0 1 0 {1 0 0}
vmd > mol clipplane center 0 1 0 {3 0 0} 
vmd > mol clipplane status 0 1 0 1
vmd > mol clipplane normal 0 2 0 {1 0 0}
vmd > mol clipplane center 0 2 0 {3 0 0}
vmd > mol clipplane status 0 2 0 1
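For the manual atom shift mentioned above, the Cartesian displacement follows from the voxel vectors stored in the cube header; a minimal numpy sketch (the 0.1 Å voxel size is a hypothetical placeholder – read the real vectors from your own cube file):

```python
import numpy as np

# Hypothetical voxel vectors (rows, in Angstrom) as read from a cube header
voxel = np.array([[0.1, 0.0, 0.0],
                  [0.0, 0.1, 0.0],
                  [0.0, 0.0, 0.1]])

# Voxel shift applied with "cube_tools -t -24 -36 0 cdd.cube"
n_shift = np.array([-24, -36, 0])

# Cartesian displacement to add to every atom position,
# here [-2.4, -3.6, 0.0] Angstrom
shift = n_shift @ voxel
print(shift)
```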

Here is the result – upon charging, the density is spread almost homogeneously over my MOF.

Some tests with GFN2-xTB

GFN2-xTB [10.1021/acs.jctc.8b01176] is a strange model. I have been testing GFN1 and GFN2 on OOH adsorption on Pt(111). GFN1 from TBLITE with ASE works well: it converges and optimizes to meaningful structures. GFN2, however, behaves oddly in terms of convergence and optimization; for instance, the O–H bond breaks. I have also tested GFN2 with xtb, whose input is quite complicated compared to ASE inputs. Anyway, it worked only when I specified the periodic conditions in both the xtb.inp and Pt-OOH.coord files. Then I executed xtb like this:

xtb Pt-OOH.coord --gfn2 --tblite --opt --periodic --input xtb.inp
Optimization of Pt(111)–OOH with GFN2-xTB (xtb) resulting in O–H bond dissociation.

P.S. You can see that the Pt(111) surface corrugates in my 2×2 model. For wider models, the surface remains flat.

Present of year 2023

I wish everyone a Merry Christmas and a Happy New Year!

As a present, let me share the discovery of this year.

Ferdium is a program that combines all messengers in a single window! For years, I tried to separate work and life by using different messengers. For work, I used fleep.io. Unfortunately, they decided to close all freemium accounts and raise their prices this year. So, I switched to other messengers and eventually mixed them up. Luckily, I found Ferdium! Just see my screenshot – all messengers in one app:

Go to ferdium.org to get it.

By the way, Opera provides similar functionality, but it does not include as many apps. For example, it does not have Element.

Zotero + chatGPT via pdfGEAR

Some time ago (in 2023), I linked Zotero with ChatGPT by creating an environment with paper-qa and pyzotero like this:
conda create -n Zotero
conda activate Zotero
conda install pip
pip install paper-qa
pip install pyzotero
pip install bs4

That worked but felt way too complicated … not something I would use on a daily basis. It also reminded me of my very first experience with Meta AI in late 2022 (which everyone has already forgotten).

Here is a much simpler recipe:

  1. Install Zotero add-on from github.com/retorquere/zotero-open-pdf to enable opening with external pdf viewers.
  2. Install pdfGEAR as your default pdf viewer (external to Zotero).

See how it works on my YouTube channel: youtu.be/4JSy2RsBLDE?si=Hbj7oq7gaOiq6END

DFT geometry optimizers

Optimizers receive undeservedly less attention than density functionals (with their Jacob’s ladder). They are not even covered in the recent review: Best-Practice DFT Protocols for Basic Molecular Computational Chemistry. At the same time, in my current projects the most resource-demanding part was geometry optimization – the time spent optimizing structures was much longer than a single-point calculation. Papers that introduce new (AI-based) optimizers promise significant speed-ups. However, there are always some problems:

  1. The tested systems are different from my electrochemical interfaces.
  2. The code is not available or difficult to install.
  3. The code is outdated and contains bugs.
  4. Optimizers perform worse than the common ones, like QuasiNewton in ASE.

The ASE wiki lists all internal and some external optimizers and provides their comparison. I have checked the most promising ones on a high-entropy alloy slab.

Observation 1. QuasiNewton outperforms all other optimizers. Period. I have run a standard GPAW/DFT/PBE/PW optimization with various optimizers:

Observation 2. Pre-optimizing the slab with a cheaper method does not reduce the number of optimization steps. I preoptimized the geometry with TBLITE/DFTB/GFN1-xTB and continued with GPAW/DFT/PBE/PW. Preoptimization takes just a few minutes and the obtained geometry looks similar to the DFT one, but it does not reduce the number of DFT optimization steps.

Optimizer   | N steps* | Time$     | N steps*# | Total time#
BFGS        | 16       | 02:44:27  | 17        | 03:01:26
LBFGS       | 15       | 02:30:35  | 16        | 02:55:04
BondMin     | 12       | 02:46:27  | 13        | 02:45:07
GPMin       | 12       | 05:26:23  | 31        | 08:14:22
MLMin       | 38       | very long | 28        | 12:31:29
FIRE        | 38       | 05:06:56  | 44        | 05:56:54
QuasiNewton | 8        | 01:36:23  | 9         | 02:00:10

Note * – the printed number of steps might differ from the actual number of calculations because each calculator reports that number differently.

Note $ – the time between the end of the first and last steps.

Note # – started from the TBLITE/DFTB/GFN1-xTB preoptimized geometry.

N.B.! I have done my test only once, in two runs: starting with slab.xyz and with the preoptimized geometry. The runs were on similar nodes, and all optimizations were done on the same node.

Conclusion. Do not believe the claims in articles advertising new optimizers – run your own tests before using them.

A practical finding. The usual problem with calculations that require many optimization steps is that they need to fit into HPC time limits. On restart, ASE usually overwrites the trajectory. Some optimizers (GPMin and AI-based ones) could benefit from reading the full trajectory. So, I started writing two trajectories and a restart file like this.

# Restarting: prefer the gpw file, fall back to the full trajectory, else start fresh
if os.path.exists(f'{name}_last.gpw') and os.stat(f'{name}_last.gpw').st_size > 0:
    atoms, calc = restart(f'{name}_last.gpw', txt=None)
    parprint('Restart from the gpw geometry.')
elif os.path.exists(f'{name}_full.traj') and os.stat(f'{name}_full.traj').st_size > 0:
    atoms = read(f'{name}_full.traj', -1)
    parprint('Restart with the traj geometry.')
else:
    atoms = read(f'{name}_init.xyz')
    parprint('Start with the initial xyz geometry.')

# Optimizing
opt = QuasiNewton(atoms, trajectory=f'{name}.traj', logfile=f'{name}.log')
traj= Trajectory(f'{name}_full.traj', 'a', atoms)
opt.attach(traj.write, interval=1)
def writegpw():
    calc.write(f'{name}_last.gpw')
opt.attach(writegpw, interval=1)
opt.run(fmax=0.05, steps=42)

Here are some details on the tests.

My gpaw_opt.py for DFT calculations on 24 cores:

# Load modules
from ase import Atom, Atoms
from ase.build import add_adsorbate, fcc100, fcc110, fcc111, fcc211, molecule
from ase.calculators.mixing import SumCalculator
from ase.constraints import FixAtoms, FixedPlane, FixInternals
from ase.data.vdw_alvarez import vdw_radii
from ase.db import connect
from ase.io import write, read
from ase.optimize import BFGS, GPMin, LBFGS, FIRE, QuasiNewton
from ase.parallel import parprint
from ase.units import Bohr
from bondmin import BondMin
from catlearn.optimize.mlmin import MLMin
from dftd4.ase import DFTD4
from gpaw import GPAW, PW, FermiDirac, PoissonSolver, Mixer, restart
from gpaw.dipole_correction import DipoleCorrection
from gpaw.external import ConstantElectricField
from gpaw.utilities import h2gpts
import numpy as np
import os

atoms = read('slab.xyz')
atoms.set_constraint([FixAtoms(indices=[atom.index for atom in atoms if atom.tag in [1,2]])])

# Set calculator
kwargs = dict(poissonsolver={'dipolelayer':'xy'},
              xc='RPBE',
              kpts=(4,4,1),
              gpts=h2gpts(0.18, atoms.get_cell(), idiv=4),
              mode=PW(400),
              basis='dzp',
              parallel={'augment_grids':True,'sl_auto':True,'use_elpa':True},
             )
calc = GPAW(**kwargs)

#atoms.calc = SumCalculator([DFTD4(method='RPBE'), calc])
#atoms.calc = calc

# Optimization parameters
maxf = 0.05

# Run optimization
###############################################################################

# 2.A. Optimize structure using MLMin (CatLearn).
initial_mlmin = atoms.copy()
initial_mlmin.set_calculator(calc)
mlmin_opt = MLMin(initial_mlmin, trajectory='results_mlmin.traj')
mlmin_opt.run(fmax=maxf, kernel='SQE', full_output=True)

# 2.B Optimize using GPMin.
initial_gpmin = atoms.copy()
initial_gpmin.set_calculator(calc)
gpmin_opt = GPMin(initial_gpmin, trajectory='results_gpmin.traj', logfile='results_gpmin.log', update_hyperparams=True)
gpmin_opt.run(fmax=maxf)

# 2.C Optimize using LBFGS.
initial_lbfgs = atoms.copy()
initial_lbfgs.set_calculator(calc)
lbfgs_opt = LBFGS(initial_lbfgs, trajectory='results_lbfgs.traj', logfile='results_lbfgs.log')
lbfgs_opt.run(fmax=maxf)

# 2.D Optimize using FIRE.
initial_fire = atoms.copy()
initial_fire.set_calculator(calc)
fire_opt = FIRE(initial_fire, trajectory='results_fire.traj', logfile='results_fire.log')
fire_opt.run(fmax=maxf)

# 2.E Optimize using QuasiNewton.
initial_qn = atoms.copy()
initial_qn.set_calculator(calc)
qn_opt = QuasiNewton(initial_qn, trajectory='results_qn.traj', logfile='results_qn.log')
qn_opt.run(fmax=maxf)

# 2.F Optimize using BFGS.
initial_bfgs = atoms.copy()
initial_bfgs.set_calculator(calc)
bfgs_opt = BFGS(initial_bfgs, trajectory='results_bfgs.traj', logfile='results_bfgs.log')
bfgs_opt.run(fmax=maxf)

# 2.G. Optimize structure using BondMin.
initial_bondmin = atoms.copy()
initial_bondmin.set_calculator(calc)
bondmin_opt = BondMin(initial_bondmin, trajectory='results_bondmin.traj',logfile='results_bondmin.log')
bondmin_opt.run(fmax=maxf)

# Summary of the results
###############################################################################

fire_results = read('results_fire.traj', ':')
parprint('Number of function evaluations using FIRE:',
         len(fire_results))

lbfgs_results = read('results_lbfgs.traj', ':')
parprint('Number of function evaluations using LBFGS:',
         len(lbfgs_results))

gpmin_results = read('results_gpmin.traj', ':')
parprint('Number of function evaluations using GPMin:',
         gpmin_opt.function_calls)

bfgs_results = read('results_bfgs.traj', ':')
parprint('Number of function evaluations using BFGS:',
         len(bfgs_results))

qn_results = read('results_qn.traj', ':')
parprint('Number of function evaluations using QN:',
         len(qn_results))

catlearn_results = read('results_mlmin.traj', ':')
parprint('Number of function evaluations using MLMin:',
         len(catlearn_results))

bondmin_results = read('results_bondmin.traj', ':')
parprint('Number of function evaluations using BondMin:',
         len(bondmin_results))

Initial slab.xyz file:

45
Lattice="8.529357696932532 0.0 0.0 4.264678848466266 7.386640443507905 0.0 0.0 0.0 29.190908217261956" Properties=species:S:1:pos:R:3:tags:I:1 pbc="T T F"
Ir       0.00000000       1.62473838      10.00000000        5
Ru       2.81412943       1.62473838      10.00000000        5
Pt       5.62825885       1.62473838      10.00000000        5
Pd       1.40706471       4.06184595      10.00000000        5
Ag       4.22119414       4.06184595      10.00000000        5
Ag       7.03532356       4.06184595      10.00000000        5
Ag       2.81412943       6.49895353      10.00000000        5
Ru       5.62825885       6.49895353      10.00000000        5
Pt       8.44238828       6.49895353      10.00000000        5
Pt       0.00000000       0.00000000      12.29772705        4
Ag       2.81412943       0.00000000      12.29772705        4
Ru       5.62825885       0.00000000      12.29772705        4
Ru       1.40706471       2.43710757      12.29772705        4
Ir       4.22119414       2.43710757      12.29772705        4
Ag       7.03532356       2.43710757      12.29772705        4
Ag       2.81412943       4.87421514      12.29772705        4
Ir       5.62825885       4.87421514      12.29772705        4
Pd       8.44238828       4.87421514      12.29772705        4
Pd       1.40706471       0.81236919      14.59545411        3
Ir       4.22119414       0.81236919      14.59545411        3
Pt       7.03532356       0.81236919      14.59545411        3
Ag       2.81412943       3.24947676      14.59545411        3
Ir       5.62825885       3.24947676      14.59545411        3
Ir       8.44238828       3.24947676      14.59545411        3
Pd       4.22119414       5.68658433      14.59545411        3
Pt       7.03532356       5.68658433      14.59545411        3
Ag       9.84945299       5.68658433      14.59545411        3
Pd       0.00000000       1.62473838      16.89318116        2
Pd       2.81412943       1.62473838      16.89318116        2
Ag       5.62825885       1.62473838      16.89318116        2
Pt       1.40706471       4.06184595      16.89318116        2
Ag       4.22119414       4.06184595      16.89318116        2
Ag       7.03532356       4.06184595      16.89318116        2
Ru       2.81412943       6.49895353      16.89318116        2
Ru       5.62825885       6.49895353      16.89318116        2
Ru       8.44238828       6.49895353      16.89318116        2
Ir       0.00000000       0.00000000      19.19090822        1
Ag       2.81412943       0.00000000      19.19090822        1
Pt       5.62825885       0.00000000      19.19090822        1
Pd       1.40706471       2.43710757      19.19090822        1
Ag       4.22119414       2.43710757      19.19090822        1
Pd       7.03532356       2.43710757      19.19090822        1
Ag       2.81412943       4.87421514      19.19090822        1
Ru       5.62825885       4.87421514      19.19090822        1
Ir       8.44238828       4.87421514      19.19090822        1

My tblite_opt.py for a DFTB calculation on just one core. It takes some minutes but eventually crashes 🙁

# Load modules
from ase import Atom, Atoms
from ase.build import add_adsorbate, fcc100, fcc110, fcc111, fcc211, molecule
from ase.calculators.mixing import SumCalculator
from ase.constraints import FixAtoms, FixedPlane, FixInternals
from ase.data.vdw_alvarez import vdw_radii
from ase.db import connect
from ase.io import write, read
from ase.optimize import BFGS, GPMin, LBFGS, FIRE, QuasiNewton
from ase.parallel import parprint
from ase.units import Bohr
from tblite.ase import TBLite
import numpy as np
import os

# https://tblite.readthedocs.io/en/latest/users/ase.html

atoms = read('slab.xyz')
atoms.set_constraint([FixAtoms(indices=[atom.index for atom in atoms if atom.tag in [1,2]])])

# Set calculator
calc = TBLite(method="GFN1-xTB",accuracy=1000,electronic_temperature=300,max_iterations=300)
atoms.set_calculator(calc)
qn_opt = QuasiNewton(atoms, trajectory='results_qn.traj', logfile='results_qn.log', maxstep=0.1)
qn_opt.run(fmax=0.1)

To compare structures I used MDAnalysis, which unfortunately does not work with ASE traj files, so I prepared xyz files with “ase convert -n -1 file.traj file.xyz”.

import MDAnalysis as mda
from MDAnalysis.analysis.rms import rmsd
import sys

def coord(file_name):
    file  = mda.Universe(f"{file_name}.xyz")
    atoms = file.select_atoms("index 1:9")
    return  atoms.positions.copy()

print(rmsd(coord(sys.argv[1]),coord(sys.argv[2])))
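For reference, the rmsd call above (without superposition) reduces to plain coordinate arithmetic; a self-contained numpy sketch that does not need MDAnalysis:

```python
import numpy as np

def rmsd(a, b):
    """Plain (non-superimposed) RMSD between two (N, 3) coordinate arrays."""
    a, b = np.asarray(a), np.asarray(b)
    # Per-atom squared displacement, averaged over atoms, then square-rooted
    return np.sqrt(((a - b) ** 2).sum(axis=1).mean())

# Usage: two toy coordinate sets, each atom displaced by (1, 1, 1),
# giving RMSD = sqrt(3)
a = np.zeros((3, 3))
b = np.ones((3, 3))
print(rmsd(a, b))
```

Note that MDAnalysis can additionally superimpose the structures before computing the RMSD; the sketch above skips that step.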

An instruction on installation of GPAW. TBLITE can be installed as “conda install -c conda-forge tblite”.

GPAW installation with pip

Between installation with conda and compilation of libraries, an intermediate path – installation of GPAW with pip – is a compromise for those who wish to test specific GPAW branches or packages.

For example, I wish to test the self-interaction error correction (SIC) and evaluate Bader charges with pybader. Neither SIC nor pybader is compatible with the most recent GPAW. Here is how to get a workable version.

# numba in pybader is not compatible with python 3.11, so create a conda environment with python 3.10
conda create -n gpaw-pip python=3.10 
conda activate gpaw-pip

conda install -c conda-forge libxc libvdwxc
conda install -c conda-forge ase
# ensure that you install the right openmpi (not external)
conda install -c conda-forge openmpi ucx
conda install -c conda-forge compilers
conda install -c conda-forge openblas scalapack
conda install -c conda-forge pytest
pip install pybader

# Get a developer version of GPAW with SIC
git clone -b dm_sic_mom_update https://gitlab.com/alxvov/gpaw.git
cd gpaw
cp siteconfig_example.py siteconfig.py

# In the siteconfig.py rewrite
'''
fftw = True
scalapack = True
if scalapack:
    libraries += ['scalapack']
'''

unset CC
python -m pip install -e .
gpaw info