
Recent Topics
  • 29 Topics
    126 Posts

    Hi,

On paper it should be possible, but before answering your question we ran some tests over the last few days and encountered some problems with the restart option of CPHF. We are going to run more tests and will let you know as soon as we have an update.

  • 4 Topics
    15 Posts

    Hi,

Hmmm... even though the SI file of the paper you shared does point to HSE06 being used, it is not entirely clear whether its parameters were kept at their defaults or altered (for all systems together or for each individually, for instance).

There are also differences between what different codes consider "default" settings. For example, within the code used in the paper (VASP, nice code, no doubt), the HSE06 defaults read (taken from https://www.vasp.at/wiki/index.php/List_of_hybrid_functionals):
$$ \omega = 0.2\ \mathring{A}^{-1}, \quad c = 0.25, \quad \text{correlation}=\text{PBE}, $$ with the first number being the range-separation parameter (omega) and the second the fraction of exact exchange used (c).

    Within CRYSTAL (also nice code, no doubt), these read (taken from the manual, page 138):
    $$ \omega= 0.11\ a_0^{-1}, \quad c = 0.25, \quad \text{correlation}=\text{PBE}, $$ adopting the same labels.

Not sure about the exact definition of the units (perhaps a developer can confirm that $a_0$ is indeed the Bohr radius, as assumed?), but you can already see the subtle differences that have to be taken into account when comparing between codes.
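Since the two codes quote omega in different units, converting to a common unit makes the comparison concrete. A quick sketch (assuming the CRYSTAL value really is in inverse Bohr radii, which, as said above, is a guess):

```python
# Compare the HSE06 range-separation parameter omega across codes,
# ASSUMING CRYSTAL's value is in inverse Bohr radii (a guess, see above).
BOHR_IN_ANGSTROM = 0.529177  # Bohr radius in Angstrom (CODATA)

omega_vasp = 0.2                    # VASP default, in 1/Angstrom
omega_crystal_bohr = 0.11           # CRYSTAL default, in 1/Bohr
omega_crystal_ang = omega_crystal_bohr / BOHR_IN_ANGSTROM  # -> 1/Angstrom

print(f"VASP:    {omega_vasp:.3f} / Angstrom")
print(f"CRYSTAL: {omega_crystal_ang:.3f} / Angstrom")
```

So the two defaults land close to each other (roughly 0.2 vs 0.21 per Angstrom) but are not identical, which is exactly the kind of subtlety that can show up when comparing band gaps between codes.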

    A few other thoughts worth considering:

In the paper, the structure was optimized with PBEsol, and HSE06 was then applied on top of that geometry as a single-point calculation. I am not sure about the exact composition of those ZIFs, but the structural differences could play a significant role as well (plane-wave codes are very costly when optimizing a structure with hybrid functionals). Here is also a good read on this topic: doi.org/10.1088/2516-1075/aafc4b

One final small comment: within the PAW formalism implemented in VASP, scalar relativistic effects are included in the pseudopotentials by default. No problem, cool feature, but it should be taken into account when comparing results, especially for heavier elements (a longer discussion can be found here: https://blog.vasp.at/forum/viewtopic.php?t=902)

    Hope this helps!

    Cheers,
    Aleks

  • Seek assistance, discuss troubleshooting tips for any technical problem you encounter and report bugs

    4 Topics
    24 Posts

Very grateful. I will run it again, but I'm not sure what to do here if it aborts again.

  • Discuss tools and techniques for visualizing simulated data

    3 Topics
    14 Posts

You're the best, thank you for going the extra mile.

  • Communications for the community and updates on upcoming events

    5 Topics
    6 Posts

    Dear CRYSTAL community,

    We’re excited to share our recent work on accelerating linear algebra operations in the CRYSTAL code using GPUs. Our implementation boosts the performance of self-consistent field (SCF) calculations by offloading key matrix operations like multiplication, diagonalization, inversion, and Cholesky decomposition to GPUs.

In the manuscript, we first analyze the performance and limitations of the standard parallel version of the code (Pcrystal), and then evaluate the scalability of the new GPU-accelerated approach on 1 to 8 GPUs, observing remarkable scaling. To highlight these improvements, we present benchmark results on different systems, such as the example below.

[Figure: post_forum_1.png, benchmark results]

    We expected significant speedups for large systems due to the limited number of k points, each requiring substantial computational effort. To ensure a fair comparison, we ran calculations using the massively parallel version of CRYSTAL (MPPcrystal) on a large MOF structure with over 30000 basis functions. Surprisingly, a single GPU on one node performed comparably to 512–1024 CPU cores running across 4–8 nodes.
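To make the list of offloaded kernels concrete, here is a minimal CPU-only NumPy sketch of where matrix multiplication, Cholesky decomposition, inversion, and diagonalization appear in a single Roothaan-type SCF step. This is purely illustrative, not CRYSTAL's implementation: the GPU version offloads these same dense operations to accelerated libraries.

```python
import numpy as np

# Illustrative Roothaan step F C = S C diag(eps) on random symmetric matrices,
# exercising the four kernels named above. Toy sizes only; in CRYSTAL the
# matrices reach tens of thousands of basis functions.
rng = np.random.default_rng(0)
n = 200  # toy basis-set size

A = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)   # symmetric positive-definite "overlap"
F = (A + A.T) / 2             # symmetric "Fock" matrix

L = np.linalg.cholesky(S)     # Cholesky decomposition of S
Linv = np.linalg.inv(L)       # (triangular) inversion
Ft = Linv @ F @ Linv.T        # matrix multiplications: orthogonalized Fock
eps, Ct = np.linalg.eigh(Ft)  # diagonalization -> orbital energies
C = Linv.T @ Ct               # back-transform eigenvectors

# Sanity checks: C^T S C = I and C^T F C = diag(eps)
assert np.allclose(C.T @ S @ C, np.eye(n), atol=1e-8)
assert np.allclose(C.T @ F @ C, np.diag(eps), atol=1e-8)
```

At large basis-set sizes these dense O(n^3) kernels dominate the SCF cost, which is why offloading exactly this set of operations pays off most for big systems with few k points.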

    To find out more, read the full paper here.

    We aim to make this GPU-accelerated version of CRYSTAL available in the upcoming release, allowing all users to benefit from its enhanced performance for large-scale simulations. We look forward to reading your thoughts and discussing potential applications or further improvements.

A big thanks to Lorenzo Donà, Chiara Ribaldone, and Filippo Spiga for their contributions to the development of this code!

Suggested Topics

  • Forum Rules

    Welcome to the official forum for CRYSTAL software users! This is a space to share knowledge, find support, and connect with others interested in solid-state simulations. To maintain a productive and respectful environment, we ask all members to adhere to the following rules...

  • CRYSTALClear

    CRYSTALClear is an open source project that provides an easy Python interface with CRYSTAL. The package allows you to quickly extract information from the CRYSTAL output files and to easily generate customizable plots...

