
Jonas Baltrusaitis

@job314
Posts: 63 · Topics: 17 · Groups: 0 · Followers: 1 · Following: 0

Posts


  • ERROR **** PGGP **** G-VECTOR NOT FOUND IN PREVIOUS DENSITY MATRIX

    I just got impatient and started the calculation from scratch again; it is about to finish. That has been my general experience with restarts: something goes wrong, and it is easier to redo the whole calculation. When I restart - and I just tried that for a different job - the restarted SCF becomes conducting and will not converge anymore, whereas if I start from scratch, everything works. A sketch of how such a restart can be staged is below.
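    For reference, a minimal sketch of the file staging for an SCF restart, assuming the standard CRYSTAL convention that GUESSP restarts the SCF from the density matrix of a previous run, read from fort.20 (the scratch path and process count are placeholders):

    #!/bin/bash
    # Minimal sketch of staging a CRYSTAL SCF restart (paths are placeholders).
    # GUESSP restarts the SCF from the density matrix of a previous run,
    # which CRYSTAL reads from fort.20 (a copy of the old fort.9).
    cd /path/to/scratch                 # placeholder scratch directory
    cp fort.9 fort.20                   # previous density matrix -> restart unit
    # The input only needs GUESSP added to the SCF block of the .d12.
    mpirun -np 96 Pcrystal < INPUT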


  • input statement requires too much data, unit 81

    I tried rerunning it with fewer nodes - I thought it was some parallel issue. The same problem again:

    ANGULAR INTEGRATION - INTERVALS (ACCURACY LEVEL [N. POINTS] UPPER LIMIT):
    1( 4[ 86] 0.2) 2( 8[ 194] 0.5) 3( 12[ 350] 0.9) 4( 16[ 974] 3.5)
    5( 12[ 350]9999.0)
    CYCLE 0 ALPHA 227.814788 EPSILON 1.894274 DELTA 2.2781E+02
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT MOQGAD TELAPSE 1902.88 TCPU 1885.42
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT CP_MONMON TELAPSE 1904.27 TCPU 1886.79
    DIIS TEST: 0.61205E+01 AT CPHF CYCLE 1 - MIX 60 %
    CYCLE 1 ALPHA 257.133404 EPSILON 2.009363 DELTA 2.9319E+01
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT MOQGAD TELAPSE 2002.77 TCPU 1984.78
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT CP_MONMON TELAPSE 2004.16 TCPU 1986.16
    DIIS TEST: 0.71887E+01 AT CPHF CYCLE 2 - DIIS ACTIVE - HISTORY: 2 CYCLES
    CYCLE 2 ALPHA 268.265588 EPSILON 2.053062 DELTA 1.1132E+01
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT MOQGAD TELAPSE 2102.04 TCPU 2083.54
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT CP_MONMON TELAPSE 2103.42 TCPU 2084.92
    DIIS TEST: 0.36370E+00 AT CPHF CYCLE 3 - DIIS ACTIVE - HISTORY: 3 CYCLES
    CYCLE 3 ALPHA 276.769385 EPSILON 2.086443 DELTA 8.5038E+00
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT MOQGAD TELAPSE 2202.03 TCPU 2183.04
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT CP_MONMON TELAPSE 2203.42 TCPU 2184.42
    DIIS TEST: 0.54051E-01 AT CPHF CYCLE 4 - DIIS ACTIVE - HISTORY: 4 CYCLES
    CYCLE 4 ALPHA 278.095061 EPSILON 2.091647 DELTA 1.3257E+00
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT MOQGAD TELAPSE 2302.12 TCPU 2282.64
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT CP_MONMON TELAPSE 2303.51 TCPU 2284.02
    DIIS TEST: 0.85023E-02 AT CPHF CYCLE 5 - DIIS ACTIVE - HISTORY: 5 CYCLES
    CYCLE 5 ALPHA 278.435921 EPSILON 2.092985 DELTA 3.4086E-01
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT MOQGAD TELAPSE 2402.16 TCPU 2382.20
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT CP_MONMON TELAPSE 2403.54 TCPU 2383.57
    DIIS TEST: 0.38480E-03 AT CPHF CYCLE 6 - DIIS ACTIVE - HISTORY: 6 CYCLES
    CYCLE 6 ALPHA 278.461661 EPSILON 2.093086 DELTA 2.5739E-02
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT MOQGAD TELAPSE 2502.06 TCPU 2481.62
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT CP_MONMON TELAPSE 2503.45 TCPU 2482.99
    DIIS TEST: 0.44991E-03 AT CPHF CYCLE 7 - DIIS ACTIVE - HISTORY: 7 CYCLES
    CYCLE 7 ALPHA 278.460154 EPSILON 2.093080 DELTA -1.5071E-03
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT MOQGAD TELAPSE 2601.70 TCPU 2580.74
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT CP_MONMON TELAPSE 2603.08 TCPU 2582.11
    DIIS TEST: 0.36243E-03 AT CPHF CYCLE 8 - DIIS ACTIVE - HISTORY: 8 CYCLES
    CYCLE 8 ALPHA 278.473843 EPSILON 2.093134 DELTA 1.3689E-02
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT MOQGAD TELAPSE 2701.77 TCPU 2680.26
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT CP_MONMON TELAPSE 2703.15 TCPU 2681.62
    DIIS TEST: 0.85073E-04 AT CPHF CYCLE 9 - DIIS ACTIVE - HISTORY: 9 CYCLES
    CYCLE 9 ALPHA 278.474328 EPSILON 2.093136 DELTA 4.8487E-04
    forrtl: severe (256): unformatted I/O to unit open for formatted transfers, unit 85, file /dev/null
    Image PC Routine Line Source
    Pcrystal 0000000007374206 Unknown Unknown Unknown
    Pcrystal 0000000001BA179E Unknown Unknown Unknown
    Pcrystal 0000000000A8038B Unknown Unknown Unknown
    Pcrystal 0000000000A63D97 Unknown Unknown Unknown
    Pcrystal 0000000000D4DAD1 Unknown Unknown Unknown
    Pcrystal 000000000074B942 Unknown Unknown Unknown
    Pcrystal 000000000040591E Unknown Unknown Unknown
    Pcrystal 00000000004053FD Unknown Unknown Unknown
    libc.so.6 000014B7C14295D0 Unknown Unknown Unknown
    libc.so.6 000014B7C1429680 __libc_start_main Unknown Unknown
    Pcrystal 0000000000405315 Unknown Unknown Unknown


  • input statement requires too much data, unit 81

    INPUT.d12
    fort.f34


  • input statement requires too much data, unit 81

    I am sorry; I am having a bad stretch with frequency calculations. The log and traceback are below, followed by a quick sanity-check sketch.

    SIZE OF GRID= 1268364
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT MAKE_GRID2 TELAPSE 3087.73 TCPU 3042.39
    BECKE WEIGHT FUNCTION
    RADSAFE = 2.00
    TOLERANCES - DENSITY:10**- 6; POTENTIAL:10**- 9; GRID WGT:10**-14

    RADIAL INTEGRATION - INTERVALS (POINTS,UPPER LIMIT): 1( 75, 4.0*R)

    ANGULAR INTEGRATION - INTERVALS (ACCURACY LEVEL [N. POINTS] UPPER LIMIT):
    1( 4[ 86] 0.2) 2( 8[ 194] 0.5) 3( 12[ 350] 0.9) 4( 16[ 974] 3.5)
    5( 12[ 350]9999.0)
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT MOQGAD TELAPSE 3150.14 TCPU 3104.48
    forrtl: severe (67): input statement requires too much data, unit 81, file /90daydata/urea_kinetics/struvite/camB3LYP_pobTZVP/restart/fort.81.pe150
    Image PC Routine Line Source
    Pcrystal 0000000007361152 Unknown Unknown Unknown
    Pcrystal 000000000735F4F6 Unknown Unknown Unknown
    Pcrystal 0000000001BA1663 Unknown Unknown Unknown
    Pcrystal 0000000000AED67B Unknown Unknown Unknown
    Pcrystal 0000000000A8D803 Unknown Unknown Unknown
    Pcrystal 0000000000ADD31F Unknown Unknown Unknown
    Pcrystal 0000000000A62692 Unknown Unknown Unknown
    Pcrystal 0000000000D4DAD1 Unknown Unknown Unknown
    Pcrystal 000000000074B942 Unknown Unknown Unknown
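    My hedged reading of the fort.81.pe150 name is that these restart files are written one per MPI rank, so a restart launched with a different rank count (or different integration settings) than the run that wrote them could trip exactly this "requires too much data" read. A quick sanity check:

    #!/bin/bash
    # Hedged sanity check: compare the number of per-rank fort.81.pe* files
    # against the rank count intended for the restart (a mismatch is one
    # plausible cause of "input statement requires too much data").
    RESTART_DIR=/90daydata/urea_kinetics/struvite/camB3LYP_pobTZVP/restart
    NP=192                              # placeholder: ranks for the restart
    nfiles=$(ls "$RESTART_DIR"/fort.81.pe* 2>/dev/null | wc -l)
    echo "found $nfiles fort.81.pe* files; restarting with $NP ranks"
    [ "$nfiles" -eq "$NP" ] || echo "WARNING: rank count mismatch"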


  • Error in RESTART of FREQCALC calculation

    Hi all, admittedly I am running these large frequency jobs and they run out of queue time, and I cannot restart them; there is always some problem. This one is an I/O error, hard to troubleshoot. The files are attached below, followed by a sketch of how a restart can be restaged.

    FREQINFO.DAT output.out TENS_RAMAN.DAT INPUT.d12
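    A minimal sketch of restaging such a restart, assuming the documented CRYSTAL behaviour that the RESTART option of FREQCALC needs FREQINFO.DAT from the interrupted run in the work directory (paths are placeholders; that TENS_RAMAN.DAT should also be carried over is my assumption, based on the attached files):

    #!/bin/bash
    # Sketch: restage a FREQCALC restart (paths are placeholders).
    OLD=/path/to/interrupted/run
    NEW=/path/to/restart/run
    mkdir -p "$NEW" && cd "$NEW"
    cp "$OLD"/FREQINFO.DAT .                # required by FREQCALC RESTART
    cp "$OLD"/TENS_RAMAN.DAT . 2>/dev/null  # assumption: keep Raman tensors
    # INPUT is the old .d12 with RESTART added inside the FREQCALC block.
    mpirun -np 192 Pcrystal < INPUT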


  • ERROR **** PGGP **** G-VECTOR NOT FOUND IN PREVIOUS DENSITY MATRIX

    Here are the files. Again, the original run was FREQCALC with geometry optimization; for the restart, I deleted the optimization keywords and added RESTART. A sketch of restarting on the optimized geometry follows the file list.
    optc045.f34
    INPUT.d12
    output.out
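    One hedged guess at keeping the restart geometry consistent with the optimized one, assuming the attached optc045.f34 is the last geometry snapshot written during OPTGEOM and using the standard EXTERNAL keyword, which makes CRYSTAL read the geometry from fort.34:

    #!/bin/bash
    # Sketch: restart the frequency run on the optimized geometry.
    cp optc045.f34 fort.34              # optimization snapshot -> geometry unit
    # In INPUT.d12, replace the CRYSTAL geometry block with EXTERNAL and
    # keep FREQCALC ... RESTART ... END as before.
    mpirun -np 192 Pcrystal < INPUT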


  • ERROR **** PGGP **** G-VECTOR NOT FOUND IN PREVIOUS DENSITY MATRIX

    Hi all, I am trying to restart a Raman calculation since it ran out of time. I am getting this strange error; all I did was add the RESTART keyword, and initially it ran OK.

    I think the problem is that I originally ran the frequency calculation together with a full geometry optimization. I then deleted the optimization keywords from FREQCALC and added RESTART. See the attached files; is there a way to restart it?

    ERROR **** PGGP **** G-VECTOR NOT FOUND IN PREVIOUS DENSITY MATRIX

    HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
    FORCE CONSTANT MATRIX - NUMERICAL ESTIMATE
    HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
    MAX ABS(DGRAD): MAXIMUM ABSOLUTE GRADIENT DIFFERENCE WITH RESPECT TO
    THE CENTRAL POINT
    DE: ENERGY DIFFERENCE WITH RESPECT TO THE CENTRAL POINT
    (DE IS EXPECTED TO BE POSITIVE FOR ALL DISPLACEMENTS)

    ATOM MAX ABS(DGRAD) TOTAL ENERGY (AU) N.CYC DE SYM
    CENTRAL POINT -6.939500764932E+03 0 0.0000E+00 8
    1 P DX RESTORED FROM A PREV. HESS. MATRIX
    1 P DY RESTORED FROM A PREV. HESS. MATRIX
    1 P DZ RESTORED FROM A PREV. HESS. MATRIX
    9 O DX RESTORED FROM A PREV. HESS. MATRIX
    9 O DY RESTORED FROM A PREV. HESS. MATRIX
    9 O DZ RESTORED FROM A PREV. HESS. MATRIX
    17 O DX RESTORED FROM A PREV. HESS. MATRIX
    17 O DY RESTORED FROM A PREV. HESS. MATRIX
    17 O DZ RESTORED FROM A PREV. HESS. MATRIX
    25 O DX RESTORED FROM A PREV. HESS. MATRIX
    25 O DY RESTORED FROM A PREV. HESS. MATRIX
    25 O DZ RESTORED FROM A PREV. HESS. MATRIX
    33 O DX RESTORED FROM A PREV. HESS. MATRIX
    33 O DY RESTORED FROM A PREV. HESS. MATRIX
    33 O DZ RESTORED FROM A PREV. HESS. MATRIX
    41 O DX RESTORED FROM A PREV. HESS. MATRIX
    41 O DY RESTORED FROM A PREV. HESS. MATRIX
    41 O DZ RESTORED FROM A PREV. HESS. MATRIX
    49 N DX RESTORED FROM A PREV. HESS. MATRIX
    49 N DY RESTORED FROM A PREV. HESS. MATRIX
    49 N DZ RESTORED FROM A PREV. HESS. MATRIX
    57 N DX RESTORED FROM A PREV. HESS. MATRIX
    57 N DY RESTORED FROM A PREV. HESS. MATRIX
    57 N DZ RESTORED FROM A PREV. HESS. MATRIX
    65 C DX RESTORED FROM A PREV. HESS. MATRIX
    65 C DY RESTORED FROM A PREV. HESS. MATRIX
    65 C DZ RESTORED FROM A PREV. HESS. MATRIX
    73 H DX RESTORED FROM A PREV. HESS. MATRIX
    73 H DY RESTORED FROM A PREV. HESS. MATRIX
    73 H DZ RESTORED FROM A PREV. HESS. MATRIX
    Abort(1) on node 0 (rank 0 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0


  • PCrystal job stuck when run between several nodes

    OK, here we go. It just gets stuck, always at the same position in the output.

    (ceres20-compute-46:0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95)
    (ceres24-compute-18:96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191)

    export TMPDIR=/local/bgfs/jonas.baltrusaitis/15383115
    export TMOUT=5400
    export SINGULARITY_TMPDIR=/local/bgfs/jonas.baltrusaitis/15383115


    MAX NUMBER OF SCF CYCLES 200 CONVERGENCE ON DELTAP 10**-20
    WEIGHT OF F(I) IN F(I+1) 30% CONVERGENCE ON ENERGY 10**-10
    SHRINK. FACT.(MONKH.) 6 6 6 NUMBER OF K POINTS IN THE IBZ 64
    SHRINKING FACTOR(GILAT NET) 6 NUMBER OF K POINTS(GILAT NET) 64


    *** K POINTS COORDINATES (OBLIQUE COORDINATES IN UNITS OF IS = 6)
    1-R( 0 0 0) 2-C( 1 0 0) 3-C( 2 0 0) 4-R( 3 0 0)
    5-C( 0 1 0) 6-C( 1 1 0) 7-C( 2 1 0) 8-C( 3 1 0)
    9-C( 0 2 0) 10-C( 1 2 0) 11-C( 2 2 0) 12-C( 3 2 0)
    13-R( 0 3 0) 14-C( 1 3 0) 15-C( 2 3 0) 16-R( 3 3 0)
    17-C( 0 0 1) 18-C( 1 0 1) 19-C( 2 0 1) 20-C( 3 0 1)
    21-C( 0 1 1) 22-C( 1 1 1) 23-C( 2 1 1) 24-C( 3 1 1)
    25-C( 0 2 1) 26-C( 1 2 1) 27-C( 2 2 1) 28-C( 3 2 1)
    29-C( 0 3 1) 30-C( 1 3 1) 31-C( 2 3 1) 32-C( 3 3 1)
    33-C( 0 0 2) 34-C( 1 0 2) 35-C( 2 0 2) 36-C( 3 0 2)
    37-C( 0 1 2) 38-C( 1 1 2) 39-C( 2 1 2) 40-C( 3 1 2)
    41-C( 0 2 2) 42-C( 1 2 2) 43-C( 2 2 2) 44-C( 3 2 2)
    45-C( 0 3 2) 46-C( 1 3 2) 47-C( 2 3 2) 48-C( 3 3 2)
    49-R( 0 0 3) 50-C( 1 0 3) 51-C( 2 0 3) 52-R( 3 0 3)
    53-C( 0 1 3) 54-C( 1 1 3) 55-C( 2 1 3) 56-C( 3 1 3)
    57-C( 0 2 3) 58-C( 1 2 3) 59-C( 2 2 3) 60-C( 3 2 3)
    61-R( 0 3 3) 62-C( 1 3 3) 63-C( 2 3 3) 64-R( 3 3 3)

    DIRECT LATTICE VECTORS COMPON. (A.U.) RECIP. LATTICE VECTORS COMPON. (A.U.)
    X Y Z X Y Z
    13.1430453 0.0000000 0.0000000 0.4780616 0.0000000 0.0000000
    0.0000000 11.6066979 0.0000000 0.0000000 0.5413413 0.0000000
    0.0000000 0.0000000 21.1989478 0.0000000 0.0000000 0.2963914

    DISK SPACE FOR EIGENVECTORS (FTN 10) 53868000 REALS

    SYMMETRY ADAPTION OF THE BLOCH FUNCTIONS ENABLED
    TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT gordsh1 TELAPSE 186.18 TCPU 45.44


  • PCrystal job stuck when run between several nodes

    It only returns this when I test the bindings (a sketch of the fix follows the script):

    /var/spool/slurmd/job15381998/slurm_script: line 17: MPI_processes: No such file or directory
    /var/spool/slurmd/job15381998/slurm_script: line 18: MPI_processes: No such file or directory

    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --tasks-per-node=96
    #SBATCH -t 140:00:00
    #SBATCH -o vasp.out
    #SBATCH -e vasp.err
    #SBATCH -p ceres
    #SBATCH --export=ALL
    #SBATCH --mail-type=ALL
    #SBATCH [email protected]
    #SBATCH -J /90daydata/urea_kinetics/struvite/camB3LYP_pobTZVP

    module unload intel
    module load crystal

    mpirun --report-bindings -np <MPI_processes> /project/urea_kinetics/CRYSTAL/1.0.1/bin/Pcrystal
    mpirun -print-rank-map -np <MPI_processes> /project/urea_kinetics/CRYSTAL/1.0.1/bin/Pcrystal

    mpirun -np 192 Pcrystal < INPUT
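    For the record, the two "No such file or directory" messages come from the shell, not from MPI: <MPI_processes> was left as a literal placeholder, and bash parses the < as input redirection from a file named MPI_processes. A sketch of the same two test lines, to drop into the script above, with the placeholder replaced by the task count Slurm already exports:

    # bash reads "<MPI_processes>" as redirection from a file named
    # MPI_processes, hence the errors on script lines 17-18.
    # SLURM_NTASKS holds the total task count of the allocation.
    mpirun --report-bindings -np "$SLURM_NTASKS" /project/urea_kinetics/CRYSTAL/1.0.1/bin/Pcrystal
    mpirun -print-rank-map -np "$SLURM_NTASKS" /project/urea_kinetics/CRYSTAL/1.0.1/bin/Pcrystal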


  • PCrystal job stuck when run between several nodes

    I tried running on 4 nodes and it got stuck; I changed to 2 nodes, and it is working. This uncertainty is what bothers me. I will try testing the bindings and report here.


  • PCrystal job stuck when run between several nodes

    Hi Giacomo, I am well aware of how to run Pcrystal; I have been doing it for many years. On this particular HPC system, however, I encounter the problem I described below. The same thing happens with VASP, and I am looking for solutions.


  • PCrystal job stuck when run between several nodes

    Colleagues, this is likely not a Pcrystal problem but an HPC problem on my end. When I run Pcrystal across several nodes, it always stops at the same point below and just sits there. The job is clearly running according to squeue, but no progress is made for many hours; the output has not been updated since it reached the point below. If I kill it and run it on a single node, it runs with no problem. When I SSH into the nodes, I see all the cores nicely occupied with Pcrystal.

    Where could the problem be? My submission script is below the log excerpt, and a minimal cross-node launcher test follows it.


    MAX NUMBER OF SCF CYCLES 200 CONVERGENCE ON DELTAP 10**-16
    WEIGHT OF F(I) IN F(I+1) 30% CONVERGENCE ON ENERGY 10**-10
    SHRINK. FACT.(MONKH.) 4 4 4 NUMBER OF K POINTS IN THE IBZ 30
    SHRINKING FACTOR(GILAT NET) 4 NUMBER OF K POINTS(GILAT NET) 30


    *** K POINTS COORDINATES (OBLIQUE COORDINATES IN UNITS OF IS = 4)
    1-R( 0 0 0) 2-C( 1 0 0) 3-R( 2 0 0) 4-C( 0 1 0)
    5-C( 1 1 0) 6-C( 2 1 0) 7-R( 0 2 0) 8-C( 1 2 0)
    9-R( 2 2 0) 10-C( 0 0 1) 11-C( 1 0 1) 12-C( 2 0 1)
    13-C( 3 0 1) 14-C( 0 1 1) 15-C( 1 1 1) 16-C( 2 1 1)
    17-C( 3 1 1) 18-C( 0 2 1) 19-C( 1 2 1) 20-C( 2 2 1)
    21-C( 3 2 1) 22-R( 0 0 2) 23-C( 1 0 2) 24-R( 2 0 2)
    25-C( 0 1 2) 26-C( 1 1 2) 27-C( 2 1 2) 28-R( 0 2 2)
    29-C( 1 2 2) 30-R( 2 2 2)

    DIRECT LATTICE VECTORS COMPON. (A.U.) RECIP. LATTICE VECTORS COMPON. (A.U.)
    X Y Z X Y Z
    17.8044525 0.0000000 -0.1043147 0.3535604 -0.0000000 0.1127826
    0.0000000 24.2698591 -0.0000000 -0.0000000 0.2588884 -0.0000000
    -3.5260288 0.0000000 11.0536983 0.0033366 0.0000000 0.5694882

    DISK SPACE FOR EIGENVECTORS (FTN 10) 44204368 REALS

    SYMMETRY ADAPTION OF THE BLOCH FUNCTIONS ENABLED

    #!/bin/bash
    #SBATCH --nodes=4
    #SBATCH --tasks-per-node=96
    #SBATCH -t 100:00:00
    #SBATCH -o vasp.out
    #SBATCH -e vasp.err
    #SBATCH -p ceres
    #SBATCH --export=ALL
    #SBATCH --mail-type=ALL
    #SBATCH [email protected]
    #SBATCH -J /90daydata/urea_kinetics/MgNH4SO4/camB3LYP_pob2TZVP/Raman

    module unload intel
    module load crystal
    mpirun -np 384 Pcrystal < INPUT
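    One way to separate launcher or interconnect trouble from Pcrystal itself is to first run a trivial command over the same allocation; if even this hangs or misses a node, the problem is the MPI setup, not CRYSTAL (a sketch using the same modules as above):

    #!/bin/bash
    #SBATCH --nodes=4
    #SBATCH --tasks-per-node=96
    #SBATCH -t 00:05:00
    #SBATCH -p ceres

    module unload intel
    module load crystal

    # Every allocated node should appear 96 times; a hang or a missing
    # node points at the MPI launch/fabric, not at Pcrystal.
    mpirun -np 384 hostname | sort | uniq -c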


  • optimized EOS coordinates and final CVOLOPT

    OK, so to me that makes EOS completely redundant. I just used the final structure of the first optimization cycle, before it started the EOS scan, and I am calculating Raman now.


  • optimized EOS coordinates and final CVOLOPT

    Yes, we are finally converging on the same idea. At least in other software packages, the FULLOPTG equivalent is rarely done; cell-shape optimization is instead done systematically, similarly to EOS. See the example above of what is done: a series of scans in which the structure is optimized at different deviations from the 1.0 lattice parameters. VASP does that with ISIF=4.

    But do you expect FULLOPTG to result in lattice parameters that closely correspond to those of the minimum of the fitted EOS?


  • optimized EOS coordinates and final CVOLOPT

    I suppose I am trying to do something like this with EOS: take the set of lattice parameters at the minimum (the volume corresponding to the minimum energy) and use it in all of my Raman calculations. A sketch of such a manual scan follows the link.

    https://www.vasp.at/wiki/index.php/Fcc_Si
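    What that tutorial does is a plain loop over scaled lattice constants; a hedged bash equivalent is below. The template.d12 file with an @ALAT@ placeholder is my own convention, not a CRYSTAL feature, and scaling a single parameter is only illustrative for a monoclinic cell:

    #!/bin/bash
    # Hedged sketch of a manual E(V) scan in the spirit of the VASP Fcc Si
    # tutorial. template.d12 / @ALAT@ are assumed conventions, not CRYSTAL
    # features; a real monoclinic scan would scale all cell parameters.
    A0=9.383                            # reference lattice parameter
    for s in 0.94 0.96 0.98 1.00 1.02 1.04 1.06; do
        d=scan_$s; mkdir -p "$d"
        a=$(echo "$A0 * $s" | bc -l)
        sed "s/@ALAT@/$a/" template.d12 > "$d"/INPUT
        (cd "$d" && mpirun -np 96 Pcrystal < INPUT > output.out)
        # assumes the total energy line contains the string TOTAL ENERGY
        grep 'TOTAL ENERGY' "$d"/output.out | tail -1
    done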


  • optimized EOS coordinates and final CVOLOPT

    In other words, I am looking for something similar to FULLOPTG, but I want to do it via a systematic scan of the lattice parameters, hence EOS.


  • optimized EOS coordinates and final CVOLOPT

    Hi Alessandro, I need to think about it. OPTGEOM will optimize at lattice parameters that are 1.0 with respect to the EOS scan; those are not optimal. I am looking for the lowest-energy structure at the optimal volume. I will get back to you with an example.


  • optimized EOS coordinates and final CVOLOPT

    In other words, it always prints the coordinates of the optimized minima at each point of the EOS scan (e.g., the points corresponding to the 0.94, 0.96, 0.98, etc. optimizations). I do not believe (or at least I cannot find) that it then prints the coordinates and lattice parameters of the global minimum that it fits between those EOS points, i.e., the exact EOS minimum.


  • P 21/a symmetry not found

    My initial confusion was that I entered

    0 0 0
    14

    and that led to problems. Today, right before your email, I rectified it as follows (a short note on what the header means is below):

    CRYSTAL
    1 0 0
    P 1 21/a 1
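    For the record, my hedged reading of the CRYSTAL geometry input is that the three integers after CRYSTAL are IFLAG IFHR IFSO: IFLAG=0 identifies the space group by its sequential number, which implies the standard setting (P 21/c for group 14), while IFLAG=1 identifies it by the Hermann-Mauguin symbol, which is what permits the alternate P 1 21/a 1 setting that the published coordinates use. A minimal header sketch:

    # Space group 14 selected by Hermann-Mauguin symbol (IFLAG=1) so the
    # alternate P 1 21/a 1 setting is kept; "0 0 0" + "14" would instead
    # impose the standard P 21/c axes on coordinates published in P 21/a.
    printf '%s\n' 'CRYSTAL' '1 0 0' 'P 1 21/a 1' > geom_header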


  • P 21/a symmetry not found

    I think I got it, but I do not know how to explain it. I set it up as below after lots of trial and error.

    MgNH4SO4*6H2O from Mg, Acta Crystallographica 17 (1964) 1478-1479
    CRYSTAL
    1 0 0
    P 1 21/a 1
    9.383 12.669 6.220 107.05
    20
    7 0.1321 0.3509 0.3611
    1 0.058 0.337 0.225
    1 0.208 0.305 0.394
    1 0.095 0.344 0.487
    1 0.174 0.421 0.346
    12 0 0 0
    8 0.1603 -0.1094 -0.0307
    8 0.1685 0.1042 0.1656
    8 -0.0017 -0.0687 0.2986
    1 0.2 0.091 0.317
    1 0.227 0.134 0.116
    1 0.252 -0.096 0.059
    1 0.143 -0.176 -0.008
    1 -0.097 -0.066 0.341
    1 0.027 -0.135 0.325
    16 0.0953 -0.3605 0.2575
    8 -0.0469 -0.4174 0.2116
    8 0.2185 -0.4328 0.3718
    8 0.1185 -0.3211 0.0456
    8 0.0951 -0.2702 0.4089
    EOS
    RANGE
    0.94 1.06 8
    PREOPTGEOM
    MAXCYCLE
    500
    END
    BASISSET
    pob-TZVP-rev2
    DFT
    B3LYP-D3
    XLGRID
    END
    TOLINTEG
    8 8 8 8 16
    SHRINK
    4 4
    BIPOSIZE
    41202400
    EXCHSIZE
    41202400
    END
