Working with RSS

RSS (Rich Site Summary, also known as Really Simple Syndication) is a format used to create feeds with articles’ metadata, including the graphical abstract, title, publication date, authors, and abstract.

Here is my way of organizing RSS feeds. Let us take the ACS journals as an example. Their RSS feeds are all listed on one page:

https://pubs.acs.org/page/follow.html

I copied them all by opening the HTML source and extracting the URLs, which I then merged into a single OPML file at https://opml-gen.ovh.
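For illustration, here is a minimal Python sketch of that copy-and-merge step (my own reconstruction, not part of the original workflow; the URL pattern is an assumption based on the feed list at the end of this post):

import re
import urllib.request

# Download the ACS "follow" page and collect the FeedBurner links.
html = urllib.request.urlopen('https://pubs.acs.org/page/follow.html').read().decode()
urls = sorted(set(re.findall(r'http://feeds\.feedburner\.com/acs/\w+', html)))

# Write the links into a simple OPML file.
with open('acs.opml', 'w') as f:
    f.write('<opml version="1.0"><head><title>ACS</title></head><body>\n')
    for url in urls:
        f.write(f'  <outline type="rss" xmlUrl="{url}"/>\n')
    f.write('</body></opml>\n')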

Then I uploaded the OPML file to a very old but still working webpage:

http://www.feedrinse.com

Feed Rinse merges all the feeds into one “channel” feed. Here is my merged feed:

http://www.feedrinse.com/services/channel/?chanurl=7bde3acd38bc31fc705118deb2300ca1

Using Feed Rinse’s interface is tricky. Check this blog post for step-by-step instructions:

https://www.journalism.co.uk/skills/how-to-tame-your-rss-sources-using-feed-rinse/s7/a53238/

In my case, Feed Rinse’s filters did not work, so I turned to https://siftrss.com/ , where one can set up a regex filter. You can test your regular expression at https://regex101.com/. Here is my example:

/(electro)|(cataly)|(double)/

which matches any word containing “electro”, “cataly”, or “double”.
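To sanity-check such a pattern offline, here is a small Python sketch (the example titles are made up):

import re

# The same alternation as in the siftrss filter, case-insensitive.
pattern = re.compile(r'(electro)|(cataly)|(double)', re.IGNORECASE)

titles = ['Electrocatalysts for water splitting',   # matches "electro" and "cataly"
          'Structure of the double layer',          # matches "double"
          'Perovskite solar cells']                 # no match
for title in titles:
    print(title, '->', bool(pattern.search(title)))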

From siftrss I got a new feed, which I added to my RSS reader.

I currently use online and mobile RSS readers that are synced with each other. Namely, I use Nextcloud News, because I have a Nextcloud account.

In these RSS readers, one can see the essential info about each article and star articles. It is a pleasure to swipe through articles on the mobile phone and star the interesting ones. Later, one can open the starred articles in the online reader and go to the publisher’s webpage. At that stage, I also use Reader View (in Firefox) and listen to the abstract.

[Screenshots: Nextcloud News (web) and Nextcloud News (mobile)]

P.S. Here are all ACS feeds (as of December 2022):

http://feeds.feedburner.com/acs/aabmcb
http://feeds.feedburner.com/acs/aaembp
http://feeds.feedburner.com/acs/aaemcq
http://feeds.feedburner.com/acs/aamick
http://feeds.feedburner.com/acs/aanmf6
http://feeds.feedburner.com/acs/aapmcd
http://feeds.feedburner.com/acs/aastgj
http://feeds.feedburner.com/acs/abmcb8
http://feeds.feedburner.com/acs/abseba
http://feeds.feedburner.com/acs/acbcct
http://feeds.feedburner.com/acs/accacs
http://feeds.feedburner.com/acs/achre4
http://feeds.feedburner.com/acs/achsc5
http://feeds.feedburner.com/acs/acncdm
http://feeds.feedburner.com/acs/acscii
http://feeds.feedburner.com/acs/acsodf
http://feeds.feedburner.com/acs/aeacb3
http://feeds.feedburner.com/acs/aeacc4
http://feeds.feedburner.com/acs/aeecco
http://feeds.feedburner.com/acs/aelccp
http://feeds.feedburner.com/acs/aesccq1
http://feeds.feedburner.com/acs/aewcaa
http://feeds.feedburner.com/acs/afsthl
http://feeds.feedburner.com/acs/aidcbc
http://feeds.feedburner.com/acs/amacgu
http://feeds.feedburner.com/acs/amachv
http://feeds.feedburner.com/acs/amclct
http://feeds.feedburner.com/acs/amlccd
http://feeds.feedburner.com/acs/amlcef
http://feeds.feedburner.com/acs/amrcda
http://feeds.feedburner.com/acs/anaccx
http://feeds.feedburner.com/acs/ancac3
http://feeds.feedburner.com/acs/ancham/
http://feeds.feedburner.com/acs/aoiab5
http://feeds.feedburner.com/acs/apaccd
http://feeds.feedburner.com/acs/apcach
http://feeds.feedburner.com/acs/apchd5
http://feeds.feedburner.com/acs/aptsfn
http://feeds.feedburner.com/acs/asbcd6
http://feeds.feedburner.com/acs/ascecg
http://feeds.feedburner.com/acs/ascefj
http://feeds.feedburner.com/acs/bcches
http://feeds.feedburner.com/acs/bichaw
http://feeds.feedburner.com/acs/bomaf6
http://feeds.feedburner.com/acs/cgdefu
http://feeds.feedburner.com/acs/chreay
http://feeds.feedburner.com/acs/cmatex
http://feeds.feedburner.com/acs/crtoec
http://feeds.feedburner.com/acs/enfuem
http://feeds.feedburner.com/acs/esthag
http://feeds.feedburner.com/acs/estlcu
http://feeds.feedburner.com/acs/iecred
http://feeds.feedburner.com/acs/inocaj
http://feeds.feedburner.com/acs/jaaucr
http://feeds.feedburner.com/acs/jacsat
http://feeds.feedburner.com/acs/jafcau
http://feeds.feedburner.com/acs/jamsef
http://feeds.feedburner.com/acs/jceaax
http://feeds.feedburner.com/acs/jceda8
http://feeds.feedburner.com/acs/jcisd8
http://feeds.feedburner.com/acs/jctcce
http://feeds.feedburner.com/acs/jmcmar
http://feeds.feedburner.com/acs/jnprdf
http://feeds.feedburner.com/acs/joceah
http://feeds.feedburner.com/acs/jpcafh
http://feeds.feedburner.com/acs/jpcbfk
http://feeds.feedburner.com/acs/jpccck
http://feeds.feedburner.com/acs/jpclcd
http://feeds.feedburner.com/acs/jprobs
http://feeds.feedburner.com/acs/langd5
http://feeds.feedburner.com/acs/mamobx
http://feeds.feedburner.com/acs/mpohbp
http://feeds.feedburner.com/acs/nalefd
http://feeds.feedburner.com/acs/oprdfk
http://feeds.feedburner.com/acs/orgnd7
http://feeds.feedburner.com/acs/orlef7

Playing with Galactica

Installation of Galactica is as easy as:

conda create -n papers python=3.8
conda activate papers
pip install galai transformers accelerate

Now you can work with the smallest Galactica models (125m, 1.3b, 6.7b) on CPUs. Here is my script:

from transformers import AutoTokenizer, OPTForCausalLM
import sys

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-6.7b")
tokenizer.pad_token_id = 1           # Galactica's <pad> token id
tokenizer.padding_side = 'left'      # left-pad so generation continues the prompt
tokenizer.model_max_length = 200
model = OPTForCausalLM.from_pretrained("facebook/galactica-6.7b", device_map="auto")

#input_text = '# Introduction \n\n The main idea of the paper "Supervised hashing for image retrieval via image representation learning" is'
#input_text = "# Review \n\n The main idea of the paper 'On the thickness of the double layer in ionic liquids'"
#input_text = "# Review High entropy alloys in electrocatalysis"
input_text = str(sys.argv[1])  # take the prompt from the command line
input_ids = tokenizer(input_text, padding='max_length', return_tensors="pt").input_ids

outputs = model.generate(input_ids, max_new_tokens=200,
                         do_sample=True,
                         temperature=0.7,
                         top_k=25,
                         top_p=0.9,
                         no_repeat_ngram_size=10,
                         early_stopping=True)
# skip_special_tokens drops the leading <pad> tokens;
# str.lstrip('<pad>') would strip characters, not the token string
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Run it on your laptop as:

python script.py "YOUR QUERY"

For example, let us check what Galactica knows about high-entropy alloys (HEA):

python script.py "High entropy alloys in catalysis "

The 6.7b model gives:

High entropy alloy catalysis (HEAC) is a new concept for catalytic applications. A series of HEAs with a similar chemical composition (CoCrFeNiMn) were prepared by arc-melting and characterized by X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), and high resolution transmission electron microscopy (HRTEM). The catalytic performance of the HEAs was tested in the CO oxidation reaction. The catalytic activity of the HEAs is compared with that of the pure metals and the HEA-supported Pt catalysts. The results show that the HEAs are active in the CO oxidation reaction, and that the activity is comparable to that of the Pt catalysts. The HEAs have a much lower activity than the pure metals. XPS and HRTEM results show that the HEAs have a different surface structure than the pure metals, which is probably the reason for the high catalytic activity of the HEA.

— Galactica

Also, let us review a paper by the founders of CHEAC (the Center for High Entropy Alloy Catalysis):

python script.py '# Introduction \n\n The main idea of the paper "Self-supported Pt–CoO networks combining high specific activity with high surface area for oxygen reduction" is'

“Self-supported Pt–CoO networks combining high specific activity with high surface area for oxygen reduction” is to report the synthesis of highly porous self-supported electrocatalysts, which combine high surface area with high specific activity for the oxygen reduction reaction (ORR). The synthesis is based on a self-supported network of Pt doped CoO (Pt-CoO) nanoparticles, which are prepared by a two-step process. In the first step, Pt-doped Co₃O₄ (Pt-Co₃O₄) nanoparticles are formed via the thermal decomposition of Co- and Pt-oleate complexes, followed by the oxidation of Pt-Co₃O₄ to Pt-CoO at 550 °C. The resulting porous self-supported network consists of Pt-CoO nanoparticles with diameters of 4–5 nm and a high surface area of 130 m2/g. The specific activity of the Pt-CoO network for the ORR is 2.6 times higher than that of the Pt/C catalyst, and the mass activity is 2.

— Galactica

You can run the same code in Google Colab, with the notebook stored in Google Drive.

Here are some links:

https://huggingface.co/facebook/galactica-125m
https://huggingface.co/spaces/morenolq/galactica-base/blob/main/app.py
https://github.com/paperswithcode/galai
https://github.com/paperswithcode/galai/issues/39

P.S. https://chat.openai.com/chat seems to be much cooler!

Positive writing

Here are my notes and thoughts about positive writing.

https://twitter.com/grammarly/status/1457749263904133124

Positive writing helps to communicate better with readers. Naturally, positive writing is more concrete than negative writing. For instance, just removing “not” from “bananas are not vegetables” or “bananas are not blue” and turning it into the positive “bananas are yellow fruits” results in a clear, undeniable statement. Another aspect of positive writing is tuning the reader’s attitude towards your ideas. Psychologically, after going through easily agreeable sentences, like “bananas are sweet” and “bananas are colorful”, the reader will be more ready to agree with your conclusion that “a banana is a comforting and nutritious choice for a lunchbox”.

More text with examples is under editing 🙂

External XC libraries for GPAW

There are two libraries of XC functionals that can be used in GPAW: libxc and libvdwxc. The conda installation of GPAW picks them up automatically. You can check whether your GPAW links to libxc and libvdwxc with gpaw info.

libvdwxc is useful when you wish to run calculations with vdW functionals, such as BEEF-vdW, in GPAW. Moreover, the libvdwxc implementations of vdW functionals are better parallelized than the native GPAW implementation. For example, add xc={'name':'BEEF-vdW','backend':'libvdwxc'} to your GPAW calculator to run a calculation with the BEEF-vdW functional. BEEF-vdW calculations with libvdwxc can run as fast as PBE-like calculations if you use proper grid and parallelization settings, like parallel={'augment_grids':True,'sl_auto':True}. Here is a list of libvdwxc functionals: gitlab.com/libvdwxc/libvdwxc
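Put together, a minimal calculator setup could look like the sketch below (the cutoff and file name are placeholders):

from gpaw import GPAW, PW

# BEEF-vdW evaluated through the libvdwxc backend;
# augment_grids and sl_auto improve the parallel efficiency.
calc = GPAW(mode=PW(600),
            xc={'name': 'BEEF-vdW', 'backend': 'libvdwxc'},
            parallel={'augment_grids': True, 'sl_auto': True},
            txt='beef-vdw.txt')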

Note that the following GPAW page is somewhat outdated:
wiki.fysik.dtu.dk/gpaw/documentation/xc/vdw.html

libxc is useful when you wish to run calculations with functionals that are not implemented in GPAW. Note that the native GPAW implementations are more efficient. There are many ways to call libxc. For example, add xc='MGGA_X_SCAN+MGGA_C_SCAN' to your GPAW calculator to run a calculation with the SCAN functional. Note that the GPAW setups are generated for LDA, PBE, and RPBE. You can generate setups specifically for your functional if it is a GGA or HGGA. Here is a list of libxc functionals: tddft.org/programs/libxc/functionals/
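As a sketch (the cutoff is a placeholder; the calculation falls back to the standard PAW setups, as noted above):

from gpaw import GPAW, PW

# SCAN exchange and correlation called from libxc by name.
calc = GPAW(mode=PW(500),
            xc='MGGA_X_SCAN+MGGA_C_SCAN',
            txt='scan.txt')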

Memory issues in GPAW

Try using the default parameters for the calculator first. Simple and often useful.

Below is a list of suggestions to consider when encountering a memory problem, i.e., when a calculation does not fit within its allocated memory limit.

Note 1: You can use --dry-run to get a memory estimate and to check the parallelization over k-points, domains, and bands, as well as the use of symmetry.

gpaw python --dry-run=N script.py

Mind that the memory estimate from --dry-run is an underestimate. https://gitlab.com/gpaw/gpaw/-/issues/614

Note 2: You can use monkey_patch_timer() to write information about memory usage into mem.* files. Call the function before the actual work starts.

from gpaw.utilities.memory import monkey_patch_timer

monkey_patch_timer()

SUBMISSION OPTIONS

  1. Try increasing the total memory or the memory per task in the submission script, if you are sure that everything else (see below) is correct.
  2. Try increasing the number of tasks (CPUs × threading) and nodes, but only if you are sure that everything else (see below) is correct. Note that your calculation can access all of a node’s memory independently of the number of allocated tasks, but not all of that memory is actually available, because some is used by the OS and other running jobs. Also, increasing the number of tasks decreases the parallelization efficiency and might decrease the queue priority (depending on the queuing system).

GEOMETRY

  1. Check the model geometry. Perhaps you can make a more compact model, for example, with the orthorhombic=False option.
  2. In slab calculations, use just enough vacuum. Mind that the PW mode is free of the egg-box effect, so, with the dipole-layer correction, you can reduce the vacuum layer significantly. Just check for the energy convergence.
    https://wiki.fysik.dtu.dk/gpaw/tutorialsexercises/electrostatics/dipole_correction/dipole.html
  3. Ensure that symmetry is used. Sometimes the calculator uses less symmetry than there is; in that case, recheck the geometry. Remember that you can preserve symmetry during optimization, as sketched below. https://wiki.fysik.dtu.dk/ase/ase/constraints.html#the-fixsymmetry-class
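Here is a minimal sketch of symmetry-preserving optimization (it requires spglib; ASE’s EMT calculator stands in for GPAW just to keep the sketch self-contained):

from ase.build import bulk
from ase.calculators.emt import EMT
from ase.optimize import BFGS
from ase.spacegroup.symmetrize import FixSymmetry

atoms = bulk('Pt', cubic=True)             # 4-atom fcc Pt cell
atoms.set_constraint(FixSymmetry(atoms))   # detect and preserve the initial spacegroup
atoms.calc = EMT()                         # stand-in; replace with your GPAW calculator
BFGS(atoms).run(fmax=0.05)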

PARALLELIZATION

In general, parallelization over domains requires less memory than parallelization over bands and k-points, but the default order of parallelization is k-points first, then domains, then bands. Remember the formula kpts × domains × bands = N, where N is the number of tasks (CPUs).

  1. In most cases, the default parallelization with symmetry is the most efficient in terms of memory usage.
  2. Prioritizing parallelization over domains can reduce the memory consumption, but it can also slow down the calculation, as parallelization over k-points is usually more time-efficient.
  3. Parallelization of any type can be suppressed by setting it to one; for example, parallel = {'domain':1} suppresses domain decomposition (see the sketch after this list). In the LCAO mode, you should check whether parallelizing over bands, like parallel = {'band':2}, helps with the memory issue.
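As an illustration of reprioritizing towards domains (the 40-task job is a made-up example):

from gpaw import GPAW, PW

# Hypothetical 40-task job: put all tasks into domain decomposition,
# which usually needs less memory than k-point parallelization.
calc = GPAW(mode=PW(400),
            kpts=(4, 4, 1),
            parallel={'kpt': 1,       # suppress k-point parallelization
                      'domain': 40,   # all 40 tasks for domain decomposition
                      'band': 1},     # no band parallelization
            txt='out.txt')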

CALCULATOR PARAMETERS

  1. Consider using a different mode. “With LCAO, you have fewer degrees of freedom, so memory usage is low. PW mode uses more memory and FD a lot more.” https://wiki.fysik.dtu.dk/gpaw/documentation/basic.html#manual-mode
  2. Change the calculation parameters, like h, the cutoff, setups (like setups={'Pt': '10'}), basis (like basis={'H': 'sz', 'C': 'dz'}), etc.; several of these are combined in the sketch after this list. Check for the convergence of properties, like in this tutorial: wiki.fysik.dtu.dk/gpaw/summerschools/summerschool22/catalysis/catalysis.html#extra-material-convergence-test
  3. It is possible to reduce the memory usage by changing the mixer options.
    https://wiki.fysik.dtu.dk/gpaw/documentation/convergence.html
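A sketch combining such memory-saving parameters (the values are illustrative, not recommendations):

from gpaw import GPAW

# LCAO mode with reduced basis sets and a smaller Pt setup.
calc = GPAW(mode='lcao',
            basis={'H': 'sz', 'C': 'dz'},  # smaller basis sets per element
            setups={'Pt': '10'},           # 10-valence-electron Pt setup
            h=0.20,                        # coarser real-space grid
            txt='memory-test.txt')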

Matching positions to grid in GPAW

Slab (2D) geometry.

from ase.build import fcc111
from gpaw import GPAW, PW
from gpaw.utilities import h2gpts
import numpy as np

# Set variables
div   = 4      # number of grid points is divisible by div
grid  = 0.16   # desired grid spacing
left  = 6.0    # vacuum at the left border of the slab
vacuum= 8.0    # vacuum at the right border above the slab/adsorbate
cutoff= 400    # PW cut-off
name  ='slab'  # output file name
func  ='RPBE'  # functional name; with libvdwxc you can use:
               # {'name':'BEEF-vdW', 'backend':'libvdwxc'}
kpt   = 4      # number of k-points

# Define a slab with fixed grid and atoms positions
atoms = fcc111('Pt', size=(4,4,4), vacuum=left)
# add adsorbate here
zmax  = np.max([i[2] for i in atoms.get_positions()])
cell  = atoms.get_cell()
cell[2][2]= grid*div*round((zmax+vacuum)/grid/div)
atoms.set_cell(cell)

# Set the default calculator
calc = GPAW(poissonsolver={'dipolelayer':'xy'},
            mode=PW(cutoff),
            xc=func,
            gpts=h2gpts(grid, atoms.get_cell(), idiv=div),
            kpts=(kpt,kpt,1),
            parallel={'augment_grids':True,'sl_auto':True},
            txt=f'{name}.txt',
           )

# Run calculation
atoms.calc = calc
atoms.get_potential_energy()

When choosing the number of k-points, consider using

kpts = {'density': 2.5,'gamma':True,'even':True}

Read
wiki.fysik.dtu.dk/gpaw/documentation/basic.html#manual-kpts
and
wiki.fysik.dtu.dk/gpaw/tutorialsexercises/structureoptimization/surface/surface.html

Molecular (0D) geometry.

from ase.build import molecule
from ase.optimize import QuasiNewton
from gpaw import GPAW, PW
from gpaw.cluster import Cluster
from gpaw.utilities import h2gpts

# Set variables
div   = 4      # number of grid points is divisible by div
grid  = 0.16   # desired grid spacing
vacuum= 8.0    # vacuum around the molecule
cutoff= 400    # PW cut-off
name  ='H2O'   # molecule name from ase.build
func  ='RPBE'  # functional name; with libvdwxc you can use:
               # {'name':'BEEF-vdW', 'backend':'libvdwxc'}
fmax  = 0.05   # maximum force in optimization
smax  = 11     # maximum steps in optimization

# Define a box with fixed grid and atoms positions
atoms = Cluster(molecule(name))
atoms.minimal_box(border=vacuum, h=grid, multiple=div)
# atoms.translate([0.01,0.02,0.03]) # probably not needed

# Set calculator
calc = GPAW(mode=PW(cutoff),
            xc = func,
            gpts=h2gpts(grid, atoms.get_cell(), idiv=div),
            parallel={'augment_grids':True,'sl_auto':True},
            txt=f'{name}.txt',
           )

# Run optimization
atoms.calc = calc
opt = QuasiNewton(atoms, trajectory=f'{name}.traj', logfile=f'{name}.log')
opt.run(fmax=fmax, steps=smax)

Installing GPAW with conda

[Updated on 14.09.2022]

In short, in a clean environment, everything should work with just five lines:

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh

Initialize conda. If it is in the .bashrc, source it. If not, source “PATHTOCONDA/miniconda3/etc/profile.d/conda.sh”.

conda create --name gpaw python=3
conda activate gpaw
conda install -c conda-forge gpaw=22.8=*openmpi*

For details, see the description below.

1. Install conda, a software and environment management system.

Here is the official instruction: docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html

As of August 2022, run these:

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh

If you wish to autostart conda, allow it to write to your .bashrc.

P.S. Here are good intros to conda:

2. Create a conda virtual environment:

conda create --name gpaw python=3.10

If needed, remove the environment as:

conda remove --name gpaw --all

You can check the available environments as:

conda env list

3. Activate the virtual environment.

conda activate gpaw

4. Install gpaw:

Ensure that no interfering modules and environments are loaded.

Purge modules by executing:

module purge

To check whether some code (like mpirun) has an alternative path, try:

which codename

or

codename --version

There should be no mpirun, ase, libxc, numpy, scipy, etc. Otherwise, the installation with conda will most probably fail due to conflicting paths.

4.1. It is safer to install using gpaw*.yml file from vliv/conda directory on FEND:

conda env create -f gpaw.yml

Note that there are many yml files with different versions of GPAW.

4.2. A plain installation is simple but might not work:

conda install -c conda-forge gpaw=*=*openmpi*

Recently, there were problems with openmpi. Try downgrading it to version 4.1.2:

conda install -c conda-forge openmpi=4.1.2

You might wish to install ucx, but be aware that there are many problems with it, e.g., depending on the mlx version:

conda install -c conda-forge ucx

If you get an error about GLIBCXX, try upgrading gcc:

conda install -c conda-forge gcc=12.1.0

4.3. To make a quick check of the installation, run “gpaw -P 2 test” or “gpaw info”.

The installation might fail. In case you succeed, save the environment to a yml file:

conda env export | grep -v "^prefix: " > gpaw.yml

Now you can use it to install gpaw as:

conda env create -f gpaw.yml

To properly test the installation, install pytest and follow wiki.fysik.dtu.dk/gpaw/devel/testing.html. That might take hours.

conda install -c conda-forge pytest pytest-xdist 

5. If needed, install extra packages within your specific conda environment (gpaw).

To apply D4 dispersion correction:

conda install -c conda-forge dftd4 dftd4-python

To analyze trajectories:

conda install -c conda-forge mdanalysis

To analyze electronic density (some might not work):

pip install git+https://github.com/funkymunkycool/Cube-Toolz.git
pip install git+https://github.com/theochem/grid.git
pip install git+https://github.com/theochem/denspart.git
pip install pybader
pip install cpmd-cube-tools
conda install -c conda-forge chargemol

To use catlearn:

pip install catlearn

To work with crystal symmetries:

conda install -c conda-forge spglib

Extra for visualization (matplotlib comes with ASE):

conda install -c conda-forge pandas seaborn bokeh jmol

To use notebooks (you might need to install firefox as well):

conda install -c conda-forge jupyterlab nodejs jupyter_contrib_nbextensions 

6. Run calculations by adding these lines to the submission script:

Note 1: Check the path and change the USERNAME.

Note 2: Turn off ucx.

Note 3: You may play with the number of OpenMP threads.

module purge
source "/groups/kemi/USERNAME/miniconda3/etc/profile.d/conda.sh"
conda activate gpaw
export OMP_NUM_THREADS=1
export OMPI_MCA_pml="^ucx"
export OMPI_MCA_osc="^ucx"
mpirun gpaw python script.py

Note 4: Check an example in the vliv/conda/sub directory.

7. Speeding up calculations.

Add the “parallel” keyword to GPAW calculator:

parallel = {'augment_grids':True,'sl_auto':True},

For more options see wiki.fysik.dtu.dk/gpaw/documentation/parallel_runs/parallel_runs.html#manual-parallel. For LCAO mode, try ELPA. See wiki.fysik.dtu.dk/gpaw/documentation/lcao/lcao.html#notes-on-performance.

parallel = {'augment_grids':True,'sl_auto':True,'use_elpa':True},

For calculations with vdW-functionals, use libvdwxc:

xc = {'name':'BEEF-vdW', 'backend':'libvdwxc'},

8. If needed, add fixes.

To do Bayesian error estimation (BEE) see doublelayer.eu/vilab/2022/03/30/bayesian-error-estimation-for-rpbe/.

To use MLMin/NEB apply corrections from github.com/SUNCAT-Center/CatLearn/pulls

9. Something worth trying:

Atomic Simulation Recipes:

asr.readthedocs.io/en/latest/

gpaw-tools:

github.com/lrgresearch/gpaw-tools/

www.sciencedirect.com/science/article/pii/S0927025622000155

ase-notebook (won’t install at FEND because of glibc 2.17):

github.com/chrisjsewell/ase-notebook

ase-notebook.readthedocs.io/en/latest/

gpaw benchmarking:

github.com/OleHolmNielsen/GPAW-benchmark-2021

github.com/mlouhivu/gpaw-benchmarks

members.cecam.org/storage/presentation/Ask_Hjorth_Larsen-1622631504.pdf

Useful tips

Regex
^.*(A|B).*(A|B).*$ matches lines containing at least two occurrences of A or B.
Nano
See https://www.nano-editor.org/dist/latest/cheatsheet.html
Alt+U to undo
Alt+A to start a selection
Alt+Shift+} to indent the selection

Bayesian Error Estimation for RPBE

Here is a trick for making the Bayesian Error Estimation (BEE) with the RPBE functional. Just edit the following lines in the ASE and GPAW code, adding RPBE as an exception.

To find the needed files, run

find ./ -name "bee.py"

In ase/dft/bee.py change one line:

class BEEFEnsemble:
    ...
            if self.xc in ['BEEF-vdW', 'BEEF', 'PBE', 'RPBE']: # add RPBE
                self.beef_type = 'beefvdw'

In gpaw/xc/bee.py add two lines:

class BEEFEnsemble:
    """BEEF ensemble error estimation."""
    def __init__(self, calc):
        ...
        # determine functional and read parameters
        self.xc = self.calc.get_xc_functional()
        if self.xc == 'BEEF-vdW':
            self.bee_type = 1
        elif self.xc == 'RPBE': # catch the RPBE exchange functional
            self.bee_type = 1   # assign the BEEF coefficients to RPBE

Below, we use the BEEF-vdW, RPBE, and PBE dimensionless densities (n) and gradients (s) and apply the BEEF coefficients (E₀, ΔEᵢ) to evaluate the BEE as the standard deviation of the ensemble total energies with the variable enhancement factor F(s,θᵢ).


from ase import Atoms
from ase.dft.bee import BEEFEnsemble
from ase.parallel import parprint
from gpaw import GPAW
import time

for xc in ['BEEF-vdW','RPBE','PBE']:
    start_time = time.time()

    h2 = Atoms('H2',[[0.,0.,0.],[0.,0.,0.741]]) #exp. bond length
    h2.center(vacuum=3)
    cell = h2.get_cell()

    calc = GPAW(xc=xc,txt='H2_{0}.txt'.format(xc))
    h2.calc = calc
    e_h2 = h2.get_potential_energy()
    ens = BEEFEnsemble(calc)
    de_h2 = ens.get_ensemble_energies()
    del h2, calc, ens

    h = Atoms('H')
    h.set_cell(cell)
    h.center()
    calc = GPAW(xc=xc,txt='H_{0}.txt'.format(xc), hund=True)
    h.calc = calc
    e_h = h.get_potential_energy()
    ens = BEEFEnsemble(calc)
    de_h = ens.get_ensemble_energies()
    del h, calc, ens

    E_bind = 2*e_h - e_h2
    dE_bind = 2*de_h[:] - de_h2[:]
    dE_bind = dE_bind.std()
    
    parprint('{0} functional'.format(xc))
    parprint('Time: {0} s'.format(round(time.time()-start_time,0)))
    parprint('E_bind: {0} eV'.format(round(E_bind,4)))
    parprint('Error bar {0} eV'.format(round(dE_bind,4)))