NWChem - computational chemistry on parallel computers

NWChem Known Bugs in versions 4.0 and 4.0.1

It is recommended that users use the defaults in NWChem. They have been set to provide maximum efficiency for most users. If you feel you must change one of the defaults, be aware that you are responsible for ensuring that the results are correct.

Below is a list of the known bugs in NWChem 4.0 and 4.0.1. If you believe that you have found bugs that are not listed here, please send your bug report using the correct channel listed in the Reporting Problems with NWChem page.

  1. Compile problem on Sun processors with Workshop 4.2
  2. Possible compile problems with Tru64 Compaq Fortran
  3. Cross-compiling problems on the IBM SP
  4. Direct MP2 optimizations
  5. PSPW does not work on HP
  6. PSPW simulations of charged systems in parallel
  7. Level shift is included in DFT unoccupied orbital energies
  8. QM MD restarts fail under MPI
  9. Failure reading in cartesian hessian from a previous frequency calculation
  10. MD restart files are overwritten
  11. Order of variables for the LINK card in the PDB file is incorrect
  12. Ring closure problems with DNA and RNA
  13. Distance restraint problem in the PREPARE module
  14. Sometimes COSMO gradients are only for the gas phase part of the calculation
  15. The COSMO code is getting a SEGV
  16. The COSMO code is getting wrong answers when run with ECPs
  17. UDFT calculations with COSMO are having convergence problems
  18. DPLOT is not picking up the vectors from the first file name in the VECTORS directive
  19. I am getting an error related to bgj_get_scf_method when running MCSCF
  20. DFT open-shell calculations with the exchange functionals xpbe96 and xperdew91 produce wrong gradients
  21. Cannot restart DFT calculations running under NQE
  22. Direct MP2 numerical gradient fails
  23. Medium and large size calculation under Linux produce NaNs
  24. GM versions more recent than 1.3 cause compilation errors
  25. DFT calculations on molecules containing ghost atoms fail
  26. I am getting a "There are insufficient internal variables" error
  27. DPLOT fails to compute negative spin densities from DFT wavefunctions
  28. Nonsense numbers in very large SCF/DFT calculations with NWChem 4.0.1
  29. ESPs give incorrect results for spherical basis sets or UHF
  30. Spin-orbit gradient calculation crashes
  31. Cannot use MPICH on RedHat 7.1 boxes
  32. Parallel runs are hanging
  33. Property calculation fails with a Segmentation Violation
  34. NWChem fails with a Segmentation Violation while reading long title input




Compile problem on Sun processors with Workshop 4.2

The user will get a fatal error when compiling nwchem/src/util/errquit.F that can be resolved by removing the -DJOBTIMEPATH definition in the GNUmakefile in the util directory. A better solution is to upgrade to at least Workshop 5.0.


Possible compile problems with Tru64 Compaq Fortran

Some versions of Tru64 Compaq Fortran default f77 to point to f90, and the compile will break because of problems with cpp and f90. This has been reported for V5.3-915. There are three known solutions:

  1. Change the f77 link to point to f77 instead of f90,
  2. Add the flag -old_f77 as the first flag in the FC definition of the DECOSF section of the $NWCHEM_TOP/src/config/makefile.h file, or
  3. Upgrade to version X5.3-1155 plus the patches at http://www6.compaq.com/fortran/dfau.html#updates.


Cross-compiling problems on the IBM SP

The PeIGS code automatically picks up the architecture it is compiled on for LAPI (SP) builds, which causes problems with cross-compiling. This is now fixed in version 4.0.1.


Direct MP2 optimizations

MP2 optimizations that are performed with the direct option can have problems with CPHF convergence. In this case, the user is advised to use the default option, which is a semi-direct calculation.
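For example, a semi-direct MP2 optimization simply omits the direct keyword from the mp2 block; a minimal sketch (the mp2 block and task line below are illustrative):

mp2
# no "direct" keyword, so the default semi-direct algorithm is used
end

task mp2 optimize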


PSPW does not work on HP

Currently, the PSPW code is giving wrong results on HP platforms. This is now fixed in version 4.0.1.


PSPW simulations of charged systems in parallel

PSPW simulations of charged systems (i.e. aperiodic simulation cell) give incorrect results when run in parallel. This is now fixed in version 4.0.1.


Level shift is included in DFT unoccupied orbital energies

DFT unoccupied orbital energies include the level shift value used to converge the wavefunction (if active). If you use these energies for any reason, you need to subtract out the level-shifting value. Check out the DFT Convergence section of the User Manual for more information about level-shifting. This is now changed in version 4.0.1.


QM MD restarts fail under MPI

An error will occur when trying to restart quantum mechanics (QM) molecular dynamics (MD) runs when the executable is compiled for MPI. The error looks like a nonexistent-file error. This is now fixed in version 4.0.1.


Failure reading in cartesian hessian from a previous frequency calculation

When run in parallel, reading in of a cartesian hessian from a previous frequency calculation causes a segmentation fault. This is now fixed in version 4.0.1.


MD restart files are overwritten

Every time the prepare module was run, the restart file was rewritten at the end, even when this was not strictly needed. As a result, the history of the restart file was lost. This does not cause anything to break, but during a set of simulations the history is lost if, for example, the restart file was converted into a PDB formatted file. This is now fixed in version 4.0.1.


Order of variables for the LINK card in the PDB file is incorrect

The order of variables written for the LINK card in the PDB file is incorrect. Normally this card is written to the PDB file only when the distance between a pair of link atoms in different segments is larger than what the prepare module would recognize as a bond. In that case, a LINK card is written so that subsequent use of the prepare module will force a bond between these atoms. With the incorrect LINK card the prepare module breaks. This is now fixed in version 4.0.1.


Ring closure problems with DNA and RNA

Some ring closure bonds were missing in the DNA and RNA fragments. This is now fixed in version 4.0.1.


Distance restraint problem in the PREPARE module

Distance restraint definitions were incorrect in the PREPARE module. This is now fixed in version 4.0.1.


Sometimes COSMO gradients are only for the gas phase part of the calculation

If the COSMO module is used and "numeric" is not added to the task line, the results are for the gas-phase system. This affects gradient, optimization, and hessian runs. This is now fixed in version 4.0.1.
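A minimal sketch of such an input, assuming a DFT gradient run (the keyword is spelled "numerical" in the TASK directive of the User Manual; the cosmo block contents are illustrative):

cosmo
end

task dft gradient numerical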


The COSMO code is getting a SEGV

In all likelihood, you are using spherical basis sets in the calculation. This will fail since the code only works in cartesians. A fix is available in version 4.0.1 which causes the calculation to fail in a much more graceful manner.
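To make sure a cartesian basis is used, the basis block can request it explicitly; a minimal sketch (the 6-31G* library name is illustrative):

basis "ao basis" cartesian
  * library 6-31G*
end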


The COSMO code is getting wrong answers when run with ECPs

ECPs are currently not working with COSMO. A fix is available in version 4.0.1 which causes the calculation to fail in a much more graceful manner.


UDFT calculations with COSMO are having convergence problems

This is associated with a bug in which memory was not being allocated properly. This is now fixed in version 4.0.1.


DPLOT is not picking up the vectors from the first file name in the VECTORS directive

The first file name is overwritten by the output vector name for SCF or DFT. This means that the MP2 orbitals cannot be used. A fix for this problem is now available.


I am getting an error related to bgj_get_scf_method when running MCSCF

There is a problem with the code when running only an MCSCF and not an SCF in the same calculation. The quick fix is to run the SCF before running the MCSCF. A fix for this problem is now available.


DFT open-shell calculations with the exchange functionals xpbe96 and xperdew91 produce wrong gradients

There is a problem with the code only when running open-shell calculations with these two functionals. A way to avoid this problem is to introduce the following line in the input file:

set "dft:weight derivatives" f

However, gradients will be less accurate since the XC grid derivatives will not be included. The user is encouraged to use the GRID keyword in the DFT block to increase the accuracy of the grid. A fix for this problem will be available for the next release.
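In context, the workaround together with a finer grid might look like the following sketch (the "fine" grid level and the task line are illustrative):

set "dft:weight derivatives" f

dft
  grid fine
end

task dft gradient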


Cannot restart DFT calculations running under NQE

This problem occurs when running under NQE (or NQS) and using the NQE $TMPDIR as scratch_dir. A way to avoid this problem is to introduce the following line in the input file:

unset grid:filename

A fix for this problem will be available for the next release.


Direct MP2 numerical gradient fails

Direct MP2 numerical gradient calculations fail in NWChem 4.0 and 4.0.1. We suggest using semi-direct MP2 (task mp2), which has analytical gradients. A fix for this problem will be available for the next release.


Medium and large size calculation under Linux produce NaNs

Linux 2.2 kernels are known to produce random floating-point arithmetic errors (eventually causing NaNs); see for example:

http://www.ccl.net/cgi-bin/ccl/message.cgi?2000+06+06+002
http://www.ccl.net/cgi-bin/ccl/message.cgi?2001+02+23+012
BUG: Global FPU corruption in 2.2
Re: BUG: Global FPU corruption in 2.2

We have experienced the same problems running NWChem with a 2.2.x kernel. This is likely due to FPU problems that have been fixed in kernel 2.2.20 and in the 2.4 series.
To fix the problem, we strongly suggest installing a 2.4.x kernel, available at

http://www.kernel.org/pub/linux/kernel/v2.4/

or to upgrade your 2.2 kernel to version 2.2.20, whose source is available at

http://www.kernel.org/pub/linux/kernel/v2.2/linux-2.2.20.tar.gz


GM versions more recent than 1.3 cause compilation errors

If you are using a version of GM more recent than 1.3, you will experience a compilation error. To avoid this, edit $NWCHEM_TOP/src/tools/armci/src/myrinet.c and add the line

#define GM_STRONG_TYPES 0

just before

#include "gm.h"


DFT calculations on molecules containing ghost atoms fail

Because of bugs in the new XC grid scheme available in NWChem 4.0, DFT calculations on molecules containing ghost atoms fail.
To revert to the old XC grid scheme and avoid the problem, insert the following line in the dft block:

grid old
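In context, within the dft input block the directive might look like this sketch (the task line is illustrative):

dft
  grid old
end

task dft energy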

If you have the NWChem source code, you can instead apply the gridbq.patch file by following these steps

1) cd $NWCHEM_TOP/src/nwdft/grid

2) patch -p0 < gridbq.patch

3) make

4) cd ../..

5) make link

A fix for this problem will be available for the next release.


I am getting a "There are insufficient internal variables" error

If you have a chemical system that has "sensitive" internal coordinates, the B and B inverse code can have problems. By sensitive, we mean that a small change in one internal coordinate has a large effect on the cartesian or internal coordinates of other atoms.

A modification will be available for the next release which makes the code a bit less sensitive to this problem.

NOTE: Even with the modification, there still may be problems when using internal coordinates in a system that has sensitive internals when performing an optimization. The optimization may proceed slowly or there may be problems in calculating B inverse.


DPLOT fails to compute negative spin densities from DFT wavefunctions

In molecules where the DFT wavefunction has regions with negative spin density (e.g., antiferromagnetic systems), DPLOT produces zero spin density instead of the correct negative values.

A fix for this problem will be available for the next release.


Nonsense numbers in very large SCF/DFT calculations with NWChem 4.0.1

A bug introduced in NWChem 4.0.1 (but not present in NWChem 4.0) can produce nonsense results in very large SCF/DFT calculations.

A fix for this problem will be available for the next release.


ESPs give incorrect results for spherical basis sets or UHF

The ESP module is restricted to cartesian basis sets and, for open-shell systems, to restricted open-shell Hartree-Fock (ROHF). This will either be fixed in the next release, or error messages will be printed out and the job will stop.


Spin-orbit gradient calculation crashes

Spin-orbit gradient calculations can crash with the following error message:

Read molecular orbitals from ./tlh.movecs

MA_verify_allocator_stuff: starting scan ...

stack block 'deriv buffer', handle 8, address 0x6580b44:

current right signature 3891216469 != proper right signature 1431655765

MA_verify_allocator_stuff: scan completed

------------------------------------------------------------------------

movecs_read_so: pop failed 18

------------------------------------------------------------------------

------------------------------------------------------------------------

current input line :

166: task sodft gradient

A fix for this problem is now available.


Cannot use MPICH on RedHat 7.1 boxes

If you compile NWChem 4.0.1 on a Linux RedHat 7.1 box with MPICH as your message-passing library, you are likely to get this error message:

0:Segmentation Violation error, status=: 11

p0_2065: p4_error: : 11

bm_list_2066: p4_error: interrupt SIGINT: 2

p0_2065: (1.280453) net_send: could not write to fd=4, errno = 32

-10001(s):armci_rcv_req: failed to receive header : 0

A fix for this problem will be available for the next release. However, if you replace MPICH with LAM, the problem vanishes. LAM is available at http://www.lam-mpi.org.


Parallel runs are hanging

We have recently found a bug that can cause NWChem parallel runs to hang shortly after the following warning message is written

-10002(s):armci_rcv_req: failed to receive header : 0

A fix for this problem is available in version 3.1.8 of the Global Arrays library (released August 30th 2001) available at the following url

http://www.emsl.pnl.gov:2080/docs/global/distribution.html


Property calculation fails with a Segmentation Violation

If you run a property calculation on a large molecule your run might fail with the error message:

0:Segmentation Violation error, status=: 11

0:ARMCI aborting 11(b)

Some shell and basis function parameters in the property module are set smaller than the corresponding NWChem parameters. No explicit checks are performed on array dimensions, so the code overwrites array boundaries (and crashes).

A fix for this problem will be available for the next release.

For a temporary fix (when needed) contact the nwchem-support queue.


NWChem fails with a Segmentation Violation while reading long title input

NWChem fails on some platforms when it tries to process a title longer than 80 characters. For example, on the IBM SP it will fail with the error message:

0:Segmentation Violation error, status=: 11

0:ARMCI aborting 11(b)

A fix for this problem will be available in the next release.

To avoid the problem in the current release, use a title string that is 80 characters or less.
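For example, a title directive that stays within the limit (the title text itself is illustrative):

title "DFT optimization of the solvated complex"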



Contact: NWChem Support
Updated: March 8, 2005