This page covers advanced configuration options. For basic settings that have to be specified upon installation, please refer to the mandatory configuration section of the previous page.
Changing the Configuration
The iserver application provides a variety of configuration options that can
be set via a text file named
application.properties. This configuration file can be located either in the same folder as
the server executable (iserver) or in a dedicated
/config subdirectory of the current working
directory. If both are present, the file in the /config subdirectory takes priority, which allows different users to easily start the server application with their own configuration.
The server must be restarted after changing the configuration file for the changes to take effect.
Scheduler and Job-Splitting Parameters
The first and most important setting selects the cluster resource management system (scheduler) in use. If your scheduler is not SGE, this setting is mandatory, as described here (PBS/TORQUE) and here (SLURM).
# Possible values: sge, pbs, torque, slurm
scheduler=sge
The next setting specifies how many resource manager slots should be used for a single virtual screening sub-job. This usually translates to processor cores. This setting can be overridden by the user when sending a remote screening request from LigandScout or KNIME.
Similar to virtual screening, it is also possible to specify how many resource manager slots should be used for a single conformer generation sub-job. This setting can be overridden by the user when sending a remote conformer generation request from LigandScout or KNIME.
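The exact property keys for these two slot counts are not preserved on this page; the following sketch uses hypothetical key names (check the default configuration shipped with iserver for the real ones):

```properties
# Hypothetical key names, for illustration only
# Scheduler slots per virtual screening sub-job
slots.screening=4
# Scheduler slots per conformer generation sub-job
slots.confgen=4
```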
All supported schedulers allow specifying a priority when submitting a new job into the queue. The higher this value is, the lower the job's priority. In the case of SGE, PBS, and TORQUE, the value has to be in the range 0 to 1023. The iserver application will set the given value as a negative priority, no matter which sign is used.
Defines or redefines the priority of the job relative to other jobs. Priority is an integer in the range -1023 to 1024. The default priority value for jobs is 0.
This statement is valid for SGE, PBS, and TORQUE.
In the case of SLURM, the value will be set via the
--nice parameter when submitting a new job. A value from 0 (highest priority) to 2147483645 (lowest priority) is allowed. The iserver application will set the given value as a positive priority, no matter which sign is used.
This setting can be overridden by the user when sending a remote screening request from LigandScout or KNIME.
It is not possible to increase the job priority with this setting. It is recommended to leave this value at 0, or to increase it if jobs started via iserver should have lower than default priority.
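A sketch of a priority setting left at its recommended default; the key name below is hypothetical, the real one appears in the default configuration:

```properties
# Hypothetical key name, for illustration only.
# 0 keeps the default priority; larger values lower the priority.
job.priority=0
```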
When using an SGE resource manager, it is required to specify a correct parallel environment. The iserver application does not support MPI, therefore a shared-memory environment has to be set. Usually, this is
smp, which is also the default setting in the iserver configuration:
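The corresponding key is not preserved above; a sketch with a hypothetical key name:

```properties
# Hypothetical key name, for illustration only.
# Shared-memory parallel environment used for SGE job submission.
sge.parallel.environment=smp
```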
The queue that should be used for submitting jobs to SGE can also be set. The specified queue name will be used when detecting the number of currently available slots. Many SGE installations use only one queue (all.q); in this case, it is not required to change the iserver configuration.
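Again with a hypothetical key name, a sketch of the queue setting:

```properties
# Hypothetical key name, for illustration only
sge.queue=all.q
```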
The next option can be used to pass any arguments to an SGE scheduling system.
This option is used to set the SLURM partition used for job submission. If no value is set, jobs will be submitted to the default partition.
This option can be used to pass any arguments to a SLURM scheduling system.
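As an illustration of the pass-through and partition options described above (all key names hypothetical, values left empty to use the defaults):

```properties
# Hypothetical key names, for illustration only
# Extra arguments appended to SGE job submissions
sge.additional.arguments=
# SLURM partition (empty means the default partition)
slurm.partition=
# Extra arguments appended to SLURM job submissions
slurm.additional.arguments=
```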
The iscreen command-line tool is used for submitting virtual screening jobs to the cluster resource manager. The following options can be used to specify the resources these iscreen jobs are allowed to consume. Practical evaluation has shown that the best performance is usually achieved when using around 1.5 to 2 times as many iscreen cores as scheduler slots, because not all jobs consume 100% of their assigned resources at all times.
iscreen.memory should be set to around 1.5 times the value of iscreen.amount.cores:
# iScreen
iscreen.amount.cores=4
iscreen.memory=6
These values should only be changed in conjunction with
Similar to iscreen, it is also possible to configure the resource usage of idbgen sub-jobs; idbgen is the tool used for conformer generation.
# idbgen
idbgen.memory=4
idbgen.memory.slaves=3
These values should only be changed in conjunction with
The iserver application is capable of splitting screening experiments and conformer generation jobs into multiple smaller sub-jobs. The following setting specifies how many compounds should be screened in a single virtual screening sub-job.
# Job Splitting
job.splitting.max.chunk.size=2000000
The following figure shows the exploitation of multiple distributed-memory nodes via job splitting. If the database is screened using two separate sub-jobs, it is possible to exploit the full HPC cluster:
This setting can be overridden by the user when sending a remote screening request from LigandScout or KNIME. By default, splitting is only done per database chunk.
Similar to virtual screening, job splitting is also done for conformer generation. The following setting specifies how many input compounds should be used for each idbgen sub-job.
job.splitting.max.chunk.size.confgen=10000
This setting can be overridden by the user when sending a remote conformer generation request from LigandScout or KNIME.
This setting allows enabling automatic merging of output database chunks after a remote conformer generation job has finished. By default, each sub-job creates one database chunk; these chunks are not merged, but only placed into a common folder.
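A sketch with a hypothetical key name:

```properties
# Hypothetical key name, for illustration only.
# true merges all output database chunks of a conformer generation
# job; false leaves them as separate chunks in a common folder.
merge.confgen.output=false
```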
If this parameter is set to true, the files belonging to specific virtual screening or conformer generation jobs are moved to a user-defined location (see below) after the job is completed.
For conformer generation, this does not concern the output screening databases, but only the log and input files. The location of the output databases is given with
job.confgen.databases.directory. However, in the case of virtual screening, this also concerns the output files (i.e. hit lists).
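For example, the output database location could be set as follows (the path is illustrative):

```properties
# Illustrative path; choose a location the server can write to
job.confgen.databases.directory=/data/iserver/confgen-databases
```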
If move_finished_jobs is set to true, the setting below specifies the location to which the files associated with jobs are moved.
# Directory to which finished and cancelled jobs will be moved if
# move_finished_jobs is set to true.
# The <user> placeholder will be replaced with the name of the
# user who started the job. Note that the server application has to be able to
# write to the respective directories.
finished_jobs_directory=/home/<user>/jobs
It is possible to specify the directory that iserver will use for storing log files. This is especially important if iserver does not have write access to its own installation directory.
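The key for the log directory is not shown on this page. Since iserver appears to be a Spring Boot application (see the spring.* keys further down), it presumably honors the standard Spring Boot logging properties; this is an assumption, not confirmed by this page:

```properties
# Assumes the standard Spring Boot logging properties apply;
# the path is illustrative
logging.file.path=/var/log/iserver
```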
The remaining logging properties are mainly relevant for developers who wish to see more detailed logging output.
# DEBUG setting will produce A LOT more logging output,
# interesting only for developers
logging.level.root=INFO
logging.level.org.springframework.web=WARN
logging.level.com.tupilabs.pbs=WARN
logging.level.ilib.server.grid.slurm=WARN
These settings specify which relational database should be used for storing the metadata associated with virtual screening and conformer generation jobs. By default, an embedded H2 database is used, which requires no further configuration. The iserver application has also been tested extensively with MySQL, but it should work with any relational database management system.
spring.datasource.url=jdbc:h2:./database/ilib-server;AUTO_SERVER=TRUE;MVCC=true
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=admin
spring.datasource.password=password
spring.jpa.generate-ddl=true
spring.jpa.show-sql=false
spring.jpa.hibernate.ddl-auto=create-update
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
spring.jpa.hibernate.use-new-id-generator-mappings=true
spring.h2.console.enabled=true
spring.h2.console.path=/console/
The first set of Tomcat properties specify how the embedded web server should log the incoming requests. The default settings should be fine for most users.
server.tomcat.basedir=./tomcat/tomcat-logs
server.tomcat.accesslog.enabled=true
server.tomcat.accesslog.pattern=%t %a "%r" %s (%D ms)
The following properties specify the maximum file size that can be uploaded to iserver.
The default values should be increased if large input files for conformer generation have to be uploaded. In case existing screening databases (
.ldb) should be uploaded, the values might have to be increased substantially (e.g. 32768MB). Please make sure that the server host provides enough disk space if large uploads are intended.
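The property keys for these limits are not shown on this page. Assuming the standard Spring Boot multipart settings apply (an assumption, not confirmed here), the limits could look like:

```properties
# Assumes the standard Spring Boot multipart properties apply
spring.servlet.multipart.max-file-size=2048MB
spring.servlet.multipart.max-request-size=2048MB
```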
The following setting specifies which port iserver listens on for new job requests. Please note that it is not a problem if the firewall blocks access to this port. All traffic uses the SSH port, as illustrated in the figure below.
# Port on which the server application will listen for requests. Default: 8080
server.port=8080
The iserver application comes with an embedded default configuration. This means that even if you delete setting keys in the
application.properties file within the iserver installation folder, the default configuration still exists as a fallback.
Below, you can see the complete default configuration: