
21 March 2017

635. Installing R on Rocks 5.4.3

Rocks 5.4.3 is based on CentOS 5.6 which is practically ancient by now (released Jan 2011).

Either way, when dealing with someone else's cluster it's better not to fiddle too much with what is already working.

Here's a not-at-all-elegant way of installing R on Rocks 5.4.3:
wget http://mirror.nsw.coloau.com.au/epel/5/x86_64/R-core-3.3.2-3.el5.x86_64.rpm
wget http://mirror.nsw.coloau.com.au/epel/5/x86_64/R-3.3.2-3.el5.x86_64.rpm 
wget http://mirror.nsw.coloau.com.au/epel/5/x86_64/R-devel-3.3.2-3.el5.x86_64.rpm 
wget http://mirror.nsw.coloau.com.au/epel/5/x86_64/libRmath-3.3.2-3.el5.x86_64.rpm 
wget http://mirror.nsw.coloau.com.au/epel/5/x86_64/libRmath-devel-3.3.2-3.el5.x86_64.rpm 
wget http://mirror.nsw.coloau.com.au/epel/5/x86_64/R-core-devel-3.3.2-3.el5.x86_64.rpm
wget http://mirror.nsw.coloau.com.au/epel/5/x86_64/libssh2-0.18-10.el5.x86_64.rpm 
wget http://mirror.nsw.coloau.com.au/epel/5/x86_64/xdg-utils-1.0.2-4.el5.noarch.rpm 
wget http://mirror.centos.org/centos/5/os/x86_64/CentOS/xz-devel-4.999.9-0.3.beta.20091007git.el5.x86_64.rpm
wget http://mirror.centos.org/centos/5/os/x86_64/CentOS/texinfo-tex-4.8-14.el5.x86_64.rpm
wget http://mirror.centos.org/centos/5/os/x86_64/CentOS/texinfo-4.8-14.el5.x86_64.rpm
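Since most of the packages come from the same EPEL mirror, a loop saves some typing. A minimal sketch covering the EPEL downloads above (the three CentOS packages still need their own wget lines):
EPEL=http://mirror.nsw.coloau.com.au/epel/5/x86_64
for pkg in R-core R R-devel libRmath libRmath-devel R-core-devel
do
    wget $EPEL/$pkg-3.3.2-3.el5.x86_64.rpm
done
wget $EPEL/libssh2-0.18-10.el5.x86_64.rpm
wget $EPEL/xdg-utils-1.0.2-4.el5.noarch.rpm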

sudo yum install R-3.3.2-3.el5.x86_64.rpm libRmath-devel-3.3.2-3.el5.x86_64.rpm libRmath-3.3.2-3.el5.x86_64.rpm R-devel-3.3.2-3.el5.x86_64.rpm R-core-3.3.2-3.el5.x86_64.rpm R-core-devel-3.3.2-3.el5.x86_64.rpm libssh2-0.18-10.el5.x86_64.rpm xdg-utils-1.0.2-4.el5.noarch.rpm texinfo-tex-4.8-14.el5.x86_64.rpm xz-devel-4.999.9-0.3.beta.20091007git.el5.x86_64.rpm texinfo-4.8-14.el5.x86_64.rpm 
[..]
Total size: 169 M
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : libssh2          1/11
  Installing : libRmath         2/11
  Installing : texinfo          3/11
  Installing : texinfo-tex      4/11
  Installing : libRmath-devel   5/11
  Installing : xz-devel         6/11
  Installing : xdg-utils        7/11
  Installing : R-core           8/11
  Installing : R-core-devel     9/11
  Installing : R-devel         10/11
  Installing : R               11/11
Installed:
  R.x86_64 0:3.3.2-3.el5                 R-core.x86_64 0:3.3.2-3.el5
  R-core-devel.x86_64 0:3.3.2-3.el5      R-devel.x86_64 0:3.3.2-3.el5
  libRmath.x86_64 0:3.3.2-3.el5          libRmath-devel.x86_64 0:3.3.2-3.el5
  libssh2.x86_64 0:0.18-10.el5           texinfo.x86_64 0:4.8-14.el5
  texinfo-tex.x86_64 0:4.8-14.el5        xdg-utils.noarch 0:1.0.2-4.el5
  xz-devel.x86_64 0:4.999.9-0.3.beta.20091007git.el5
Complete!

Testing:
R
R version 3.3.2 (2016-10-31) -- "Sincere Pumpkin Patch"
Copyright (C) 2016 The R Foundation for Statistical Computing
Platform: x86_64-redhat-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> q()
Save workspace image? [y/n/c]: n
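For a quick non-interactive sanity check (e.g. in a queue script on a compute node), Rscript, which is part of R-core, works too:
Rscript -e 'cat(R.version.string, "\n")'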

19 May 2013

421. NWChem 6.3 on ROCKS 5.4.3/CentOS 5.6

Update 23 May 2013: With a new patch applied, the execution times are pretty much the same as for 6.1.1. I've updated the instructions below to incorporate this new patch (http://www.nwchem-sw.org/images/Iswtch.patch.gz).

Update 21 May 2013:
The execution times can be improved considerably by setting
ARMCI_NETWORK=SOCKETS

They are still ca 30% longer than with 6.1.1, though, due to slower SCF convergence.
See http://www.nwchem-sw.org/index.php/Special:AWCforum/st/id834/Nwchem_6.3_running_2-5_times_slo....html

UPDATE 20 May 2013:
NWChem 6.3 is very slow compared to 6.1.1. A six-core run (out of eight cores available) took 121 s using 6.1.1 but 254 s using 6.3!

I observed this on Debian as well: there, 6.3 is about five times slower than 6.1.1 (e.g. 190 s vs 40 s at 8 cores; see http://verahill.blogspot.com.au/2013/05/414-frequency-vs-cores-crude.html). Not sure why that is.

Original:
NWChem 6.3 is out now. Here's how to build it on ROCKS 5.4.3 (based on CentOS 5.6) for CPU-based calculations (currently only CCSD(T) can take advantage of GPU/CUDA anyway).

To build on debian, see http://verahill.blogspot.com.au/2013/05/424-nwchem-63-on-debian-wheezy.html

This assumes that you've got a proper build environment (gcc, fortran, openmpi) installed.

Openblas:
I've added all users who do computations to the group compchem.
sudo mkdir /share/apps/openblas
sudo chown $USER:compchem /share/apps/openblas
cd ~/tmp
wget http://nodeload.github.com/xianyi/OpenBLAS/tarball/v0.1.1
tar xvf v0.1.1
cd xianyi-OpenBLAS-e6e87a2/
wget http://www.netlib.org/lapack/lapack-3.4.1.tgz
make all BINARY=64 CC=/usr/bin/gcc FC=/usr/bin/gfortran USE_THREAD=0 INTERFACE64=1 1> make.log 2>make.err

make PREFIX=/share/apps/openblas install
cp lib*.*  /share/apps/openblas/lib
sudo chmod 755 /share/apps/openblas -R

For later use with nwchem and ecce, add /share/apps/openblas/lib to /etc/ld.so.conf and do
sudo ldconfig
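In practice that amounts to something like this (same openblas path as above; on newer systems a drop-in file under /etc/ld.so.conf.d/ would be tidier):
echo '/share/apps/openblas/lib' | sudo tee -a /etc/ld.so.conf
sudo ldconfig
ldconfig -p | grep openblas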

Put
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/share/apps/openblas/lib
in ~/.bashrc and/or queue files.

NWChem
I've added all users who do computations to the group compchem.
sudo mkdir /share/apps/nwchem/
sudo chown $USER:compchem /share/apps/nwchem/

cd /share/apps/nwchem
wget http://www.nwchem-sw.org/download.php?f=Nwchem-6.3-src.2013-05-17.tar.gz
tar xvf Nwchem-6.3-src.2013-05-17.tar.gz 
cd nwchem-6.3-src.2013-05-17/
cd src/
wget http://www.nwchem-sw.org/images/Iswtch.patch.gz
gzip -d Iswtch.patch.gz
patch -p0 < Iswtch.patch
cd ../
export LARGE_FILES=TRUE
export TCGRSH=/usr/bin/ssh
export NWCHEM_TOP=`pwd`
export NWCHEM_TARGET=LINUX64
export NWCHEM_MODULES="all python"
export PYTHONHOME=/opt/rocks
export PYTHONVERSION=2.4
export USE_MPI=y
export USE_MPIF=y
export USE_MPIF4=y
export MPI_LOC=/opt/openmpi
export MPI_INCLUDE=/opt/openmpi/include
export LIBRARY_PATH=$LIBRARY_PATH:/opt/openmpi/lib:/share/apps/openblas
export LIBMPI="-lmpi -lopen-rte -lopen-pal -ldl -lmpi_f77 -lpthread"
export BLASOPT="-L/share/apps/openblas/lib -lopenblas -lopenblas_nehalem-r0.1.1 -lopenblas_nehalemp-r0.1.1"

export ARMCI_NETWORK=SOCKETS

cd $NWCHEM_TOP/src
export FC=gfortran
make clean
make  nwchem_config
make  FC=gfortran
cd ../contrib
./getmem.nwchem
 sudo chmod 755 /share/apps/nwchem/nwchem-6.3-src.2013-05-17 -R
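NWChem drops the binary in bin/$NWCHEM_TARGET under the source tree, so a quick check that the build actually produced something:
ls -lh /share/apps/nwchem/nwchem-6.3-src.2013-05-17/bin/LINUX64/nwchem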

Create a default.nwchemrc in /share/apps/nwchem
nwchem_basis_library /share/apps/nwchem/nwchem-6.3-src.2013-05-17/src/basis/libraries/
ffield amber
amber_1 /share/apps/nwchem/nwchem-6.3-src.2013-05-17/src/data/amber_s/
amber_2 /share/apps/nwchem/nwchem-6.3-src.2013-05-17/src/data/amber_x/
amber_3 /share/apps/nwchem/nwchem-6.3-src.2013-05-17/src/data/amber_q/
amber_4 /share/apps/nwchem/nwchem-6.3-src.2013-05-17/src/data/amber_u/
amber_5 /share/apps/nwchem/nwchem-6.3-src.2013-05-17/src/data/custom/
spce /share/apps/nwchem/nwchem-6.3-src.2013-05-17/src/data/solvents/spce.rst
charmm_s /share/apps/nwchem/nwchem-6.3-src.2013-05-17/src/data/charmm_s/
charmm_x /share/apps/nwchem/nwchem-6.3-src.2013-05-17/src/data/charmm_x/
and put symlinks to it in the users' home directories, e.g.
cd ~
ln -s /share/apps/nwchem/default.nwchemrc .nwchemrc
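To do that for all existing users in one go, something along these lines should work (a sketch; it assumes home directories live under /home, so adjust the glob to your cluster):
for home in /home/*; do
    sudo ln -s /share/apps/nwchem/default.nwchemrc $home/.nwchemrc
done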

20 March 2013

364. Setting up a new user on a ROCKS cluster

Because I keep forgetting about the rocks sync command...

From https://groups.google.com/forum/?fromgroups=#!topic/rocks-clusters/P6tvn_2Gk5Y

To add a new user to a ROCKS cluster and let them use Sun Grid Engine, do the following

sudo useradd -m verahill
sudo passwd verahill
su verahill
exit
sudo usermod -a -G compchem verahill
rocks sync users
qconf -auser verahill
name            verahill
oticket         0
fshare          0
delete_time     0
default_project NONE

where compchem is a usergroup I've set up to give everyone access to the executables they need.

The first login, using su above, creates the .ssh directory and rsa/dsa keys.

Finally, to force the user to change their password on first login, do
chage -d 0 verahill
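Since this is exactly the sort of thing I forget, here's the whole procedure rolled into a small helper script (a sketch; it assumes the compchem group already exists, and qconf will still drop you into an editor):
#!/bin/sh
# usage: sh newuser.sh username
NEW=$1
sudo useradd -m $NEW
sudo passwd $NEW
su - $NEW -c 'exit'        # first login creates ~/.ssh and the rsa/dsa keys
sudo usermod -a -G compchem $NEW
rocks sync users
qconf -auser $NEW
sudo chage -d 0 $NEW       # force a password change on first login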

05 November 2012

275. Compiling Dalton 2011 on ROCKS 5.4.3/CentOS

I've previously struggled with Dalton 2.0-cam and given up. I somehow didn't know about Dalton 2011 at that point, but it turns out to be much easier to build. At any rate, I managed to build it on ROCKS/CentOS (gcc 4.1); I'm still working on the Debian version, which has a much newer gcc (4.7).

Before you get started you may want to compile ATLAS as shown here: http://verahill.blogspot.com.au/2012/09/rocks-543-atlas-and-gromacs-on-xeon.html

License:
First go to http://daltonprogram.org/licence/ and fill out the license agreement. Once that's done you'll get an automated email with a license form, which you should print, sign, scan and email to the email address you're given. Once your form has been processed you'll be sent another email with a user name and password. I received my user name and password the next business day.

Go online and download the source file, Dalton2011_release_v0.tgz, and put it in ~/tmp. Sort out where you want your program to end up
sudo mkdir /share/apps/dalton
sudo chown $USER /share/apps/dalton
mkdir /share/apps/dalton/bin /share/apps/dalton/basis /share/apps/dalton/lsdalton

Next,
cd ~/tmp
tar xvf Dalton2011_release_v0.tgz
cd Dalton2011_release/DALTON
./configure 

and answer all the questions:
------------------------------------------------------------------
   Configuring the DALTON Makefile.config and "dalton" run script
------------------------------------------------------------------

INFO: Operating system from 'uname -s' : Linux
INFO: Processor type   from 'uname -m' : x86_64
No architecture specified, attempting auto-configuration:
This appears to be a -linux architecture. Is this correct? [Y/n] 
--> Installing DALTON on a -linux computer


Note that 64-bit integers are desirable for Cholesky and very large
scale CI, otherwise the most important effect is that some files will be bigger.

If you choose 64-bit integers, be careful that any system library
routines (incl. MPI) also use 64-bit integers!

Do you want 64-bit integers? [y/N] Do you want to install the program in a parallel MPI version? [Y/n] 
-->WARNING: Makefiles for MPI architecture are difficult to guess
   Please compare the generated Makefile.config with local documentation.

   Checking for Fortran compiler ...
   from this list: mpif90 mpiifort ifort pgf95 pgf90 gfortran g95 

Compiler /opt/openmpi/bin/mpif90 found, use this compiler? [Y/n] 
-->Compiler mpif90 found and accepted.
Is backend compiler gfortran ? [Y/n] 
   Checking for C compiler ...
   from this list: mpicc  mpiicc   icc ecc pgcc gcc 

Compiler /opt/openmpi/bin/mpicc found, use this compiler? [Y/n] 
-->Compiler mpicc found and accepted.

Testing existence of libraries in this order:
 libacml.a libmkl.so libmkl_p3.a libatlas.a libblas.a
Directory search list for libraries:
  /state/partition1/home/me/tmp/ATLAS/build/lib /state/partition1/apps/ATLAS/lib /lib /usr/local/lib /usr/lib /usr/local/lib/ATLAS /lib64 /usr/lib64 /usr/local/lib64 

Do you want to replace this with your own directory search list? [y/N] Found /state/partition1/home/me/tmp/ATLAS/build/lib/libatlas.a, use it? [Y/n] Found /state/partition1/apps/ATLAS/lib/libatlas.a, use it? [Y/n] 
-->The following mathematical library(ies) will be used:
   -L/state/partition1/apps/ATLAS/lib -llapack -llapack -lf77blas -latlas


DALTON uses almost 100 Megabytes of static
allocations, in addition to the dynamic allocation.

DALTON has the possibility to reserve an amount of static memory
for storing two-electron integrals in direct and parallel calculations
Storing some or all of the 2-el. integrals in memory will speed up
direct and parallel calculations (and in particular the latter).
NOTE: This will increase the static memory allocation used by DALTON

Would you like to activate the possibility of storing 2-el.int. in memory? [y/N] How many MB to use for storing 2-el. integrals? 
-->Program will be installed with 500 MB (65000000 words) used for storing 2-el. integrals

Maximum amount of work memory for dynamic allocations can be changed
at run time with the environment variable WRKMEM (in REAL*8 words = megabytes/8)
or by using the -M option to the run script: "dalton -M mb ..." (in megabytes).
We recommend at least 200 MB work memory,
larger for correlated calculations, but it should for maximum
efficiency NOT exceed available physical memory per CPU in parallel calculations.

How many MB to use as default for work memory (hit return for default of 1000 MB)? 
-->Program will be installed with a default work memory of 900 MB (117000000 words)

-->Current directory is /home/me/tmp/Dalton2011_release/DALTON

Use default ../bin as installation directory for DALTON binaries and scripts? [Y/n] Please enter another installation directory: 
-->DALTON executable and script will be placed in /share/apps/dalton/test directory


-->Default basis set directory will be /home/me/tmp/Dalton2011_release/DALTON/../basis/

Use this directory as default basis set directory? [Y/n] 
Please choose another default basis set directory (must end with /) 
-->Default basis set directory will be /share/apps/dalton/basis/


I did not find /work, /scratch, /scr, or /temp. I will use /tmp

-->Job specific directories under $SCRATCH/$USER
-->will be used for temporary files when running DALTON

Use SCRATCH=/tmp as default root scratch space in "dalton" run script? [Y/n] 
-->Creating Makefile.config ...
gfortran version 412 prc=x86_64
INFO: Compiling with 32-bit integers.
INFO: Make sure pre-compiled BLAS, MPI etc. libraries are also with 32-bit integers!!!

Proper 64-bit file access detected.

-->Creating the DALTON run-script in /share/apps/dalton/test

   The configuration of DALTON has finished succesfully.
   Check compiler flags etc. in Makefile.config and run "make" to get executable.

Regardless of what you answer, here's an example of the Makefile.config that I used. The key is to add -I../modules to INCLUDES and to delete -fbacktrace.


ARCH        = linux
#
#
CPPFLAGS      = -DVAR_GFORTRAN -DSYS_LINUX -DVAR_MFDS -D'INSTALL_WRKMEM=117000000' -D'INSTALL_MMWORK=65000000' -D_FILE_OFFSET_BITS=64 -DVAR_MPI -DGFORTRAN=412 -DIMPLICIT_NONE
F90           = mpif90
CC            = mpicc
LOADER        = mpif90
RM            = rm -f
FFLAGS        = -march=x86-64 -O3 -ffast-math -funroll-loops -ftree-vectorize 
SAFEFFLAGS    = -march=x86-64 -O3 -ffast-math -funroll-loops -ftree-vectorize 
CFLAGS        = -march=x86-64 -O3 -ffast-math -funroll-loops -ftree-vectorize -std=c99 -DRESTRICT=restrict -DFUNDERSCORE=1
INCLUDES      = -I../include -I../modules
MODULES       = -J../modules
LIBS          = -L/state/partition1/apps/ATLAS/lib -llapack -llapack -lf77blas -latlas -L/opt/openmpi/lib -lmpi
INSTALLDIR    = /share/apps/dalton/test
PDPACK_EXTRAS = linpack.o eispack.o gp_zlapack.o gp_dlapack.o
GP_EXTRAS     = 
AR            = ar
ARFLAGS       = rvs
# flags for ftnchek on Dalton /hjaaj
CHEKFLAGS  = -nopure -nopretty -nocommon -nousage -noarray -notruncation -quiet  -noargumants -arguments=number  -usage=var-unitialized
# -usage=var-unitialized:arg-const-modified:arg-alias
# -usage=var-unitialized:var-set-unused:arg-unused:arg-const-modified:arg-alias
#
default : dalton linuxparallel.x
SAFE_FFLAGS_for_ifort = $(FFLAGS)
#
# Parallel initialization
#
MPI_INCLUDE_DIR = 
MPI_LIB_PATH    = 
MPI_LIB         = 
#
#
# Suffix rules
# hjaaj Oct 04: .g is a "cheat" suffix, for debugging.
#               'make x.g' will create x.o from x.F or x.c with -g debug flag set.
#
.SUFFIXES : .F .F90 .c .o .i .g .s

.F.o:
        $(F90) $(INCLUDES) $(MODULES) $(CPPFLAGS) $(FFLAGS) -c $*.F 

.F.i:
        $(F90) $(INCLUDES) $(MODULES) $(CPPFLAGS) -E $*.F > $*.i

.F.g:
        $(F90) $(INCLUDES) $(MODULES) $(CPPFLAGS) $(SAFEFFLAGS) -g -c $*.F 

.F.s:
        $(F90) $(INCLUDES) $(MODULES) $(CPPFLAGS) $(FFLAGS) -S -g -c $*.F 

.F90.o:
        $(F90) $(INCLUDES) $(MODULES) $(CPPFLAGS) $(FFLAGS) -c $*.F90 

.F90.i:
        $(F90) $(INCLUDES) $(MODULES) $(CPPFLAGS) -E $*.F90 > $*.i

.F90.g:
        $(F90) $(INCLUDES) $(MODULES) $(CPPFLAGS) $(SAFEFFLAGS) -g -c $*.F90 

.F90.s:
        $(F90) $(INCLUDES) $(MODULES) $(CPPFLAGS) $(FFLAGS) -S -g -c $*.F90 

.c.o:
        $(CC) $(INCLUDES) $(CPPFLAGS) $(CFLAGS) -c $*.c 

.c.i:
        $(CC) $(INCLUDES) $(CPPFLAGS) $(CFLAGS) -E $*.c > $*.i

.c.g:
        $(CC) $(INCLUDES) $(CPPFLAGS) $(CFLAGS) -g -c $*.c 

.c.s:
        $(CC) $(INCLUDES) $(CPPFLAGS) $(CFLAGS) -S -g -c $*.c 

 
If all looks well, run make:
make
cd ../
cp basis/* /share/apps/dalton/basis

DO NOT RUN MAKE IN PARALLEL, i.e. no make -j3 or anything like that.
Add /share/apps/dalton/bin to your PATH, i.e. add a line saying
export PATH=$PATH:/share/apps/dalton/bin
to your ~/.bashrc and source it.
So far I haven't had much time to look at it, but here's the result of the 'short' test series:
./TEST -dalton /share/apps/dalton/bin/dalton short 
[..]
#####################################################################
                              Summary
#####################################################################

THERE IS A PROBLEM IN TEST CASE(S)
 prop_exci prop_vibg2 walk_vibave2 dftmm_1
date and time         : Sun Nov  4 18:41:59 PST 2012

Here's what I found for each of the troublesome ones above:

prop_exci:
126:  INFO from READIN: Threshold for discarding integrals was    1.00D-16
127:  INFO from READIN: Threshold is reset to minimum value       1.00D-15
But otherwise it finished ok.

prop_vibg2:
 SIROUT stat info, IST and IEND =                   0                  -1
 IST or IEND out of bounds - probably no optimization in this run.
But otherwise it finished ok.

walk_vibave2:
3 informational messages have been issued by Dalton,
output from 'grep -n INFO'  (max 10 lines):
549: *** SETSIR-INFO, time in NSETUP:       0.00 seconds.
2346: *** SETSIR-INFO, time in NSETUP:       0.00 seconds.
3691: *** SETSIR-INFO, time in NSETUP:       0.00 seconds
But otherwise it finished ok.

dftmm_1:
 NOTE:    1 warnings have been issued.
 Check output, result, and error files for "WARNING".
dftmm_1.tar.gz has been copied to /home/me/tmp/Dalton2011_release/DALTON/test
----------------------------------------------------------
2 WARNINGS have been issued by Dalton,
output from 'grep -n -i WARNING'  (max 10 warnings):
711: NOTE:    1 warnings have been issued.
712: Check output, result, and error files for "WARNING".
I can't find the warning in the output, which looks like it finished ok.

All in all, it looks very promising.


Note on running in parallel
I had to do

mkdir /tmp/$USER
first.

In addition, when running I have to explicitly define my scratch directory:
dalton -t /tmp/$USER -N 4 myinput.dal myinput.mol
Other than that it's OK. I just get the overall impression that things aren't very stable (some jobs crash, some don't).
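For queue runs, a minimal Sun Grid Engine job script might look like the following (a sketch; 'orte' is a placeholder for whatever parallel environment your cluster defines):
#!/bin/sh
#$ -S /bin/sh
#$ -cwd
#$ -pe orte 4
export PATH=$PATH:/share/apps/dalton/bin
mkdir -p /tmp/$USER
dalton -t /tmp/$USER -N 4 myinput.dal myinput.mol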

30 October 2012

272. Compiling NWChem 6.1.1.1 on ROCKS 5.4.3/CentOS 5.6

Nothing weird with this one; it's all but identical to the build on Debian. But here's a step-by-step anyway, to help those who are computational chemists but not sysadmins.

Preparations:
First compile openblas according to http://verahill.blogspot.com.au/2012/05/building-nwchem-61-on-debian.html 

Next, create e.g. /share/apps/nwchem, like this
sudo mkdir /share/apps/nwchem
sudo chmod 755 /share/apps/nwchem

This gives the owner read, write and execute permissions, while group members and 'world' can read and execute, but not write.
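Note that since the directory was created with sudo it is owned by root, so the write bit applies to root rather than to you; as in the other posts, you probably also want to hand it over before unpacking:
sudo chown $USER:compchem /share/apps/nwchem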

If you've already built earlier versions of nwchem, you can skip the steps above.

NWChem:
You will need to go to http://www.nwchem-sw.org/index.php/Download and download version 6.1.1. Using the direct link (http://www.nwchem-sw.org/images/Nwchem-6.1.1-src.2012-06-27.tar.gz) with wget isn't working for me anymore.

Put your Nwchem-6.1.1-src.2012-06-27.tar.gz in /share/apps/nwchem and expand it.
tar xvf Nwchem-6.1.1-src.2012-06-27.tar.gz
cd nwchem-6.1.1-src/

Create buildconf.sh
export LARGE_FILES=TRUE
export TCGRSH=/usr/bin/ssh
export NWCHEM_TOP=`pwd`
export NWCHEM_TARGET=LINUX64
export NWCHEM_MODULES="all python"
export PYTHONHOME=/opt/rocks
export PYTHONVERSION=2.4
export USE_MPI=y
export USE_MPIF=y
export USE_MPIF4=y
export MPI_LOC=/opt/openmpi
export MPI_INCLUDE=/opt/openmpi/include
export LIBRARY_PATH=$LIBRARY_PATH:/opt/openmpi/lib:/share/apps/openblas
export LIBMPI="-lmpi -lopen-rte -lopen-pal -ldl -lmpi_f77 -lpthread"
export BLASOPT="-L/share/apps/openblas/lib -lopenblas -lopenblas_nehalem-r0.1.1 -lopenblas_nehalemp-r0.1.1"
cd $NWCHEM_TOP/src
export FC=gfortran
make clean
make  nwchem_config
make  FC=gfortran |tee make.log
cd ../contrib
./getmem.nwchem

Before running it, edit src/config/makefile.h and change line 1957:
1957      EXTRA_LIBS +=    -lnwcutil  -lpthread -lutil -ldl -lz -lssl
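If you'd rather script the edit, a sed one-liner along these lines does it (a sketch that assumes the stock line 1957 merely lacks -lz -lssl, so check with a diff afterwards):
sed -i '1957s/$/ -lz -lssl/' src/config/makefile.h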
You are now ready to build.
time sh buildconf.sh

It took about 15 minutes to build -- a clear improvement over 6.1 for me (30 min+)

Create a default.nwchemrc in your /share/apps/nwchem/nwchem-6.1.1-src/ folder
nwchem_basis_library /share/apps/nwchem/nwchem-6.1.1-src/src/basis/libraries/
ffield amber
amber_1 /share/apps/nwchem/nwchem-6.1.1-src/src/data/amber_s/
amber_2 /share/apps/nwchem/nwchem-6.1.1-src/src/data/amber_x/
amber_3 /share/apps/nwchem/nwchem-6.1.1-src/src/data/amber_q/
amber_4 /share/apps/nwchem/nwchem-6.1.1-src/src/data/amber_u/
amber_5 /share/apps/nwchem/nwchem-6.1.1-src/src/data/custom/
spce /share/apps/nwchem/nwchem-6.1.1-src/src/data/solvents/spce.rst
charmm_s /share/apps/nwchem/nwchem-6.1.1-src/src/data/charmm_s/
charmm_x /share/apps/nwchem/nwchem-6.1.1-src/src/data/charmm_x/
Then each user can do
ln -s /share/apps/nwchem/nwchem-6.1.1-src/default.nwchemrc ~/.nwchemrc

You might also want to add nwchem to your PATH -- add
export PATH=$PATH:/share/apps/nwchem/nwchem-6.1.1-src
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/openmpi/lib:/share/apps/openblas
to your ~/.bashrc

25 July 2012

216. Quantum Espresso on ROCKS 5.4.3 / CentOS 5.6 -- Almost got it.

Preamble:

I've seen just about every error message imaginable when compiling Quantum Espresso (no parallel environment detected, can't build Fortran programs, no linker, parallel vs serial compiler, etc.). You can most likely avoid them all by following this guide verbatim.

This compile is nothing like the easy-breezy one on Debian, for one simple reason: CentOS 5.6 is old. It's so old that the default gcc version can't compile QE. You'd encounter similar problems with an increasing number of software packages, including ABINIT.

      CentOS 5.6: gcc version 4.1.2 20080704 (Red Hat 4.1.2-50)
      Wheezy:       gcc version 4.7.1 (Debian 4.7.1-2)

The crusty compiler collection means we have to hammer out errors like these during compilation:


In file dfile_star.f90:28
    INTEGER,ALLOCATABLE ::  npert (:), irgq (:)
                      1
Error: Attribute at (1) is not allowed in a TYPE definition
 In file dfile_star.f90:37

    REAL(DP),ALLOCATABLE :: gi (:,:), gimq (:), eigen(:)
                       1
Error: Attribute at (1) is not allowed in a TYPE definition
 In file dfile_star.f90:42

    COMPLEX(DP),ALLOCATABLE :: u(:,:), t(:,:,:,:), tmq (:,:,:)
                          1
Error: Attribute at (1) is not allowed in a TYPE definition


Hence the requirement to first compile GCC 4.7, even if it takes a couple of hours. 

To the students out there swearing (and sweating) over their professors' insistence on using ROCKS/CentOS: compiling a compiler is not a wasted skill and just might be useful in building cross compilation environments for the time when you decide that science can take a hike and there's more money in working on setting up embedded systems. At least it'll buy you a little geek cred. 

Also, it seems that you need to use math and mpi libs compiled with the same compiler as you use to compile quantum espresso. So...lots of steps today as well!

Also, I haven't reported the configure bug for openmpi 1.6. Feel free to reproduce and report it.

I'm not happy that Quantum Espresso ignores the compile flags I've given it and falls back to defaults it shouldn't. I've had to jump through way too many hoops to be happy about this. OpenMPI not supporting --program-suffix annoys me as well.

This is another massive compile...I'd rate it as my most annoying, troublesome and infuriating one yet.


1. Build GCC (gfortran, g++, gcc) 4.7 according to this post:
 http://verahill.blogspot.com.au/2012/07/compiling-gcc-471gfortran-471-on-centos.html
Make sure that the new compilers end up on your PATH:
export PATH=$PATH:/share/apps/tools/gcc/gcc47/bin

2. Create target directory structure
sudo mkdir /share/apps/libs
sudo chown $USER /share/apps/libs

3. Recompile GNU OpenMP (libgomp)
It's probably easiest to start with a freshly extracted gcc-4.7.1 tree:
cd ~/tmp/gcc
mkdir newgcc
cp gcc-4.7.1.tar.gz newgcc/
cd newgcc
tar xvf gcc-4.7.1.tar.gz
cd gcc-4.7.1/libgomp
mkdir build
cd build/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/share/apps/tools/gcc/mpc/lib:/share/apps/tools/gcc/gmp/lib:/share/apps/tools/gcc/mpfr/lib
.././configure CC=gcc-gcc-4.7 CXX=g++-gcc-4.7 F77=gfortran-gcc-4.7 FC=gfortran-gcc-4.7 --prefix=/share/apps/tools/gcc/gcc47

md5sum /share/apps/tools/gcc/gcc47/lib/libgomp.so.1.0.0
23db68121aacf1e7d896704474e26350  /share/apps/tools/gcc/gcc47/lib/libgomp.so.1.0.0

make
make install

md5sum /share/apps/tools/gcc/gcc47/lib/libgomp.so.1.0.0
791a1ceb0ea4009e22703b82e8ce1ff4  /share/apps/tools/gcc/gcc47/lib/libgomp.so.1.0.0

4.  Compile OpenMPI using your new GCC 
cd ~/tmp/
wget http://www.open-mpi.org/software/ompi/v1.6/downloads/openmpi-1.6.tar.gz
tar xvf openmpi-1.6.tar.gz
cd openmpi-1.6
mkdir build
cd build/
export LD_LIBRARY_PATH=/share/apps/tools/gcc/gcc47/lib64:/share/apps/tools/gcc/gcc47/lib/gcc/x86_64-unknown-linux-gnu/4.7.1:/share/apps/tools/gcc/gmp/lib:/share/apps/tools/gcc/mpc/lib:/share/apps/tools/gcc/mpfr/lib

.././configure --prefix=/share/apps/libs/openmpi47 CC=gcc-gcc-4.7 FC=gfortran-gcc-4.7 F77=gfortran-gcc-4.7 F90=gfortran-gcc-4.7 CPP=cpp-gcc-4.7 CXX=g++-gcc-4.7 --with-sge LDFLAGS="-L/share/apps/tools/gcc/gcc47/lib64 -L/share/apps/tools/gcc/gcc47/lib/gcc/x86_64-unknown-linux-gnu/4.7.1 -L/share/apps/tools/gcc/gmp/lib -L/share/apps/tools/gcc/mpc/lib -L/share/apps/tools/gcc/mpfr/lib -L/share/apps/tools/gcc/gcc47/lib" CPPFLAGS="-I/share/apps/tools/gcc/mpc/include -I/share/apps/tools/gcc/gmp/include -I/share/apps/tools/gcc/mpfr/include -I/share/apps/tools/gcc/gcc47/lib/gcc/x86_64-unknown-linux-gnu/4.7.1/include"

** There's a bug in the openmpi configure script leading to a config failure due to an extra space being introduced in front of -L when testing the F77 compiler. Exporting LD_LIBRARY_PATH is done to get around that. **

make
make install

cd /share/apps/libs/openmpi47/bin/
ls mpi* |xargs -I {} ln -s {} {}-gcc-4.7
echo 'export PATH=$PATH:/share/apps/libs/openmpi47/bin' >> ~/.bashrc
source ~/.bashrc

cd /share/apps/libs/openmpi47/share/openmpi/
ln -s mpif77-vt-wrapper-data.txt mpif77-gcc-4.7-vt-wrapper-data.txt
ln -s mpif90-vt-wrapper-data.txt mpif90-gcc-4.7-vt-wrapper-data.txt
ln -s mpicc-vt-wrapper-data.txt mpicc-gcc-4.7-vt-wrapper-data.txt
ln -s mpic++-vt-wrapper-data.txt mpic++-gcc-4.7-vt-wrapper-data.txt

ln -s mpicxx-vt-wrapper-data.txt mpicxx-gcc-4.7-vt-wrapper-data.txt
ln -s mpif77-wrapper-data.txt mpif77-gcc-4.7-wrapper-data.txt
ln -s mpif90-wrapper-data.txt mpif90-gcc-4.7-wrapper-data.txt
ln -s mpicc-wrapper-data.txt mpicc-gcc-4.7-wrapper-data.txt
ln -s mpic++-wrapper-data.txt mpic++-gcc-4.7-wrapper-data.txt
ln -s mpicxx-wrapper-data.txt mpicxx-gcc-4.7-wrapper-data.txt
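The same symlink farm can be generated with a loop, and the wrappers checked afterwards (--showme is a standard OpenMPI wrapper flag that prints the underlying compile line):
cd /share/apps/libs/openmpi47/share/openmpi/
for w in mpif77 mpif90 mpicc mpic++ mpicxx; do
    ln -s $w-vt-wrapper-data.txt $w-gcc-4.7-vt-wrapper-data.txt
    ln -s $w-wrapper-data.txt $w-gcc-4.7-wrapper-data.txt
done
mpif90-gcc-4.7 --showme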


5. Compile FFTW using your new GCC
cd ~/tmp
wget ftp://ftp.fftw.org/pub/fftw/fftw-3.3.2.tar.gz
tar xvf fftw-3.3.2.tar.gz

cd fftw-3.3.2/
./configure --enable-float --enable-mpi --enable-threads --with-pic --prefix=/share/apps/libs/fftw-3.3.2-gcc47/single CC=gcc-gcc-4.7 FC=gfortran-gcc-4.7 CPP=cpp-gcc-4.7 MPIF90=mpif90-gcc-4.7 F77=gfortran-gcc-4.7 MPICC=mpicc-gcc-4.7 MPIF77=mpif77-gcc-4.7

make
make install
make check

--------------------------------------------------------------
         FFTW threaded transforms passed basic tests!
--------------------------------------------------------------
..
Executing "mpirun -np 1 ..
..
make[3]: *** [check-local] Error 1
..

That mpirun won't work isn't that surprising, since it still points to /opt/openmpi/bin instead of our mpirun-gcc-4.7.
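If you want the MPI parts of make check to pass, putting the new wrappers first in PATH before rerunning should do it (a sketch, untested here):
export PATH=/share/apps/libs/openmpi47/bin:$PATH
make check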

make distclean
./configure --disable-float --enable-mpi --enable-threads --with-pic --prefix=/share/apps/libs/fftw-3.3.2-gcc47/double CC=gcc-gcc-4.7 FC=gfortran-gcc-4.7 CPP=cpp-gcc-4.7 MPIF90=mpif90-gcc-4.7 F77=gfortran-gcc-4.7 MPICC=mpicc-gcc-4.7 MPIF77=mpif77-gcc-4.7

make
make install
make check
--------------------------------------------------------------
         FFTW threaded transforms passed basic tests!
--------------------------------------------------------------
...followed by the same error as above.
It's fine.

6. Optional: Compile openblas using your new GCC
cd /share/apps/tools/gcc/binutils/bin
ln -s ar-gcc-4.7 gcc-ar

cd ~/tmp
wget http://nodeload.github.com/xianyi/OpenBLAS/tarball/v0.1.1
mv v0.1.1 openblas.tar.gz
tar xvf openblas.tar.gz
cd xianyi-OpenBLAS-e6e87a2/
wget http://www.netlib.org/lapack/lapack-3.4.1.tgz

export LD_LIBRARY_PATH=/share/apps/tools/gcc/gmp/lib:/share/apps/tools/gcc/mpc/lib:/share/apps/tools/gcc/mpfr/lib

make all BINARY=64 CC=gcc-gcc-4.7 FC=gfortran-gcc-4.7 MPICC=mpicc-gcc-4.7 USE_THREAD=0 INTERFACE64=1 

make PREFIX=/share/apps/libs/openblas47 install

cp libopenblas_* /share/apps/libs/openblas47/lib
cd /share/apps/libs/openblas47/lib
ln -s libopenblas_nehalem-r0.1.1.so libopenblas.so.1
ln -s libopenblas.so.1 libopenblas.so
ln -s libopenblas.so libopenblas.so.0
ln -s libopenblas_nehalem-r0.1.1.a libopenblas.a

7. Compile Quantum Espresso using your new GCC
sudo mkdir /share/apps/QE
sudo chown $USER /share/apps/QE
mkdir /share/apps/QE/lapack-3.2

mkdir ~/tmp/QE
cd ~/tmp/QE

wget http://qe-forge.org/frs/download.php/211/espresso-5.0.tar.gz
wget http://qe-forge.org/frs/download.php/214/PWgui-5.0.tgz
wget http://qe-forge.org/frs/download.php/204/xspectra-5.0.tar.gz

Edit environment_variables
PREFIX=/share/apps/QE
TMP_DIR=/tmp/QE
PARA_PREFIX="mpirun-gcc-4.7 -n 8"

Time to configure and compile:
cd ~/tmp/QE
tar xvf espresso-5.0.tar.gz
cd espresso-5.0/
cp lapack-3.2 -R /share/apps/QE/
cp BLAS -R /share/apps/QE/

Create a script called compile.sh:

make clean
export PATH=""
export PATH="/usr/bin:/usr/sbin:/bin:/sbin:/usr/local/ecce/apps/scripts:/share/apps/pvm/pvm3/bin/LINUX64:/share/apps/libs/openmpi47/bin:/share/apps/nwchem/nwchem-6.1/bin/LINUX64:/share/apps/sinfo/bin:/share/apps/sinfo/sbin:/share/apps/gromacs/bin:/share/apps/cmake/bin:/share/apps/tools/babel/bin:/share/apps/tools/htop/bin:/share/apps/tools/xmgrace/grace/bin:/share/apps/tools/rasmol/src:/share/apps/tools/strace/bin:/share/apps/tools/gcc/gcc47/bin:/share/apps/tools/gcc/binutils/bin"
which mpif90
read -p "press any key"
export CC=gcc-gcc-4.7
export FC=gfortran-gcc-4.7
export MPIF90=mpif90-gcc-4.7
export MPIF77=mpif77-gcc-4.7
export F77=gfortran-gcc-4.7
export F90=gfortran-gcc-4.7
export CPP=cpp-gcc-4.7
export CXX=g++-gcc-4.7
export CFLAGS=""
export DFLAGS="-DFFT_FFTW3"
export CPPFLAGS="-I/share/apps/tools/gcc/mpc/include -I/share/apps/tools/gcc/gmp/include -I/share/apps/tools/gcc/mpfr/include -I/share/apps/tools/gcc/gcc47/lib/gcc/x86_64-unknown-linux-gnu/4.7.1/include -I/share/apps/libs/openmpi47/include -I//share/apps/tools/gcc/gcc47/lib/gcc/x86_64-unknown-linux-gnu/4.7.1/include"
export LDFLAGS="-L/share/apps/tools/gcc/gcc47/lib -L/share/apps/tools/gcc/gcc47/lib64 -L/share/apps/tools/gcc/gcc47/lib/gcc/x86_64-unknown-linux-gnu/4.7.1 -L/share/apps/tools/gcc/gmp/lib -L/share/apps/tools/gcc/mpc/lib -L/share/apps/tools/gcc/mpfr/lib -L/share/apps/libs/openmpi47/lib -L/lib64 -lc"
#export BLAS_LIBS="-L/share/apps/libs/openblas47/lib -lopenblas" 
export BLAS_LIBS="/share/apps/QE/BLAS/blas.a"
export LAPACK_LIBS="/share/apps/QE/lapack-3.2/lapack.a"
export FFT_LIBS="-L/share/apps/libs/fftw-3.3.2-gcc47/double/lib -lfftw3 -lfftw3_mpi -lfftw3_threads"
export MPI_LIBS="-L/share/apps/libs/openmpi47/lib -l:/share/apps/libs/openmpi47/lib/libmpi.so"
export LD_LIBRARY_PATH=/share/apps/tools/gcc/gmp/lib:/share/apps/tools/gcc/mpfr/lib:/share/apps/tools/gcc/mpc/lib:/share/apps/libs/openmpi47/lib
./configure --prefix=/share/apps/QE/bin|tee conf.log
read -p "Press key to continue"
cd PW
make |tee make.log
cd ../
make all |tee makeall.log

Then run it using

sh compile.sh

Which gives:

--------------------------------------------------------------------
ESPRESSO can take advantage of several optimized numerical libraries
(essl, fftw, mkl...).  This configure script attempts to find them,
but may fail if they have been installed in non-standard locations.
If a required library is not found, the local copy will be compiled.

The following libraries have been found:
  BLAS_LIBS=/share/apps/QE/BLAS/blas.a
  LAPACK_LIBS=/share/apps/QE/lapack-3.2/lapack.a
  FFT_LIBS=-L/share/apps/libs/fftw-3.3.2-gcc47/double/lib -lfftw3 -lfftw3_mpi -lfftw3_threads
  MPI_LIBS=-L/share/apps/libs/openmpi47/lib -l:/share/apps/libs/openmpi47/lib/libmpi.so
Please check if this is what you expect.

If any libraries are missing, you may specify a list of directories
to search and retry, as follows:
  ./configure LIBDIRS="list of directories, separated by spaces"

Parallel environment detected successfully.
Configured for compilation of parallel executables.

For more info, read the ESPRESSO User's Guide (Doc/users-guide.tex).
--------------------------------------------------------------------
configure: success

and more... Once it's over, do

cd ~/tmp/QE/espresso-5.0
cp * -R /share/apps/QE/
echo 'export PATH=$PATH:/share/apps/QE/bin' >> ~/.bashrc
source ~/.bashrc

8. Testing PW -- STUCK
export LD_LIBRARY_PATH=/share/apps/tools/gcc/gmp/lib:/share/apps/tools/gcc/mpfr/lib:/share/apps/tools/gcc/mpc/lib:/share/apps/libs/openmpi47/lib:/share/apps/libs/openblas47/lib:/share/apps/tools/gcc/gcc47/lib64
cd /share/apps/QE/PW/examples/example02
./run_example

/share/apps/QE/PW/examples/example02 : starting

This example shows how to use pw.x to compute the equilibrium geometry
of a simple molecule, CO, and of an Al (001) slab.
In the latter case the relaxation is performed in two ways:
1) using the quasi-Newton BFGS algorithm
2) using a damped dynamics algorithm.

  executables directory: /share/apps/QE/bin
  pseudo directory:      /share/apps/QE/pseudo
  temporary directory:   /tmp/QE
  checking that needed directories and files exist...
Downloading O.pz-rrkjus.UPF to /share/apps/QE/pseudo...
Downloading C.pz-rrkjus.UPF to /share/apps/QE/pseudo... done

  running pw.x as: mpirun-gcc-4.7 -n 8  /share/apps/QE/bin/pw.x  

  cleaning /tmp/QE... done
  running the geometry relaxation for CO... from test_input_xml: Empty input file .. stopping
 from test_input_xml: Empty input file .. stopping
 from test_input_xml: Empty input file .. stopping
 from test_input_xml: Empty input file .. stopping
 from test_input_xml: Empty input file .. stopping
 from test_input_xml: Empty input file .. stopping
STOP 2
STOP 2
--------------------------------------------------------------------------
mpirun-gcc-4.7 noticed that the job aborted, but has no info as to the process
that caused that situation.
--------------------------------------------------------------------------
Error condition encountered during test: exit status = 2
Aborting

It's more insidious than that though -- pw.x crashes with "STOP 2" for any type of input.

NOTE: As on Debian, I had segfaults when using my own openblas (which works fine with gromacs and nwchem). That's why I show you how to compile it, yet never use it.