Scalasca  (Scalasca 2.5, revision 18206)
Scalable Performance Analysis of Large-Scale Applications
scan – Scalasca measurement collection and analysis nexus

SYNOPSIS

scan [OPTIONS] [LAUNCHER [LAUNCHER_ARGS]] TARGET [TARGET_ARGS]

DESCRIPTION

scan, the Scalasca measurement collection and analysis nexus, manages the configuration and processing of performance experiments with an executable TARGET. TARGET needs to be instrumented beforehand using the Score-P instrumentation and measurement system. In particular, scan integrates the following steps:

Many different experiments can typically be performed with a single instrumented executable without needing to re-instrument, by using different measurement and analysis configurations. The default runtime summarization mode directly produces an analysis report for examination, whereas event trace collection and analysis are automatically done in two steps to produce a profile augmented with additional metrics.

Serial and multi-threaded programs are typically launched directly, whereas MPI and hybrid MPI+X programs usually require a special LAUNCHER command such as mpiexec, which may need additional arguments LAUNCHER_ARGS (e.g., to specify the number of processes to be created). scan automatically recognizes many MPI launchers, but if not, the MPI launcher name can be specified using the environment variable SCAN_MPI_LAUNCHER (see ENVIRONMENT).
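As a hedged illustration of the launcher-recognition fallback (the launcher name "mylaunch" below is hypothetical):

```shell
# A site-specific launcher that scan does not recognize automatically;
# naming it via SCAN_MPI_LAUNCHER lets scan parse the command line and
# determine the number of ranks:
SCAN_MPI_LAUNCHER=mylaunch scan mylaunch -n 4 ./foo args
```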

scan examines the executable TARGET to verify that Score-P instrumentation is present; if it is not, measurement is aborted. The numbers of MPI processes and OpenMP threads are determined from LAUNCHER_ARGS and/or the environment. If the target executable is not specified as one of the launcher arguments, it is expected to be the immediately following part of the command line. It may be necessary to use a double-dash specification (--) to explicitly separate the target from the preceding launcher specification. If an imposter executable or script precedes the instrumented TARGET, e.g., one often used to specify placement/thread binding, it may be necessary to explicitly identify the target with the environment variable SCAN_TARGET (see ENVIRONMENT).
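Two sketches of target identification, assuming an instrumented executable foo (the numactl placement wrapper stands in for any imposter script):

```shell
# An imposter command precedes the instrumented target, so scan is told
# explicitly which argument is the target executable:
export SCAN_TARGET=foo
scan mpiexec -n 4 numactl --cpunodebind=0 ./foo args

# Alternatively, a double dash separates the launcher specification
# from the target when no imposter is involved:
scan mpiexec -n 4 -- ./foo args
```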

A unique directory is created for each measurement experiment, which must not already exist when measurement starts unless SCAN_OVERWRITE is enabled (see ENVIRONMENT); otherwise measurement is aborted. A default name for each measurement archive directory is created from a 'scorep_' prefix, the name of the executable TARGET, the run configuration (e.g., number of processes specified), and the measurement configuration. This default name can be overwritten using the SCOREP_EXPERIMENT_DIRECTORY environment variable (see ENVIRONMENT) or the -e command-line option.
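A minimal sketch of how the default archive name is assembled (illustrative shell only; the real name is constructed by scan itself):

```shell
# Pieces of the default name: 'scorep_' prefix, target name, run
# configuration (here 4 MPI ranks x 3 OpenMP threads), measurement mode.
target=foobar
config=4x3
mode=sum
dir="scorep_${target}_${config}_${mode}"
echo "$dir"    # → scorep_foobar_4x3_sum
```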

When measurement has completed, the measurement archive directory contains all artifacts produced by the measurement and subsequent trace analysis (if configured). In particular, the following files are produced independent from the selected measurement mode:

In runtime summarization mode, the archive directory additionally contains:

In trace collection and analysis mode, the following additional files are generated:

In multi-run mode, the results of the individual runs are stored in subdirectories inside the top-level measurement archive directory. In addition, the following file will be archived:

OPTIONS

The scan command accepts the following command-line options. Note that configuration settings specified on the command line take precedence over those specified via environment variables (see ENVIRONMENT). Also, see MULTI-RUN EXPERIMENTS below for details on interactions with configuration file settings.

-h

Print a brief usage summary, then exit.

-v

Increase verbosity.

-n

Print the command(s) to be launched, but do not execute them.

-q

Quiescent execution with neither summarization nor tracing enabled. Sets both SCOREP_ENABLE_PROFILING and SCOREP_ENABLE_TRACING to 'false'.

-s

Enable runtime summarization mode. Sets SCOREP_ENABLE_PROFILING to 'true'. This is the default.

-t

Enable trace collection and analysis. Sets SCOREP_ENABLE_TRACING to 'true'.

-a

Skip measurement step to (re-)analyze an existing trace.
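For instance, assuming an existing trace archive from an earlier run (the archive name below is illustrative):

```shell
# Re-run the automatic trace analysis on an existing trace experiment,
# skipping the measurement step:
scan -a -e scorep_foo_4_trace mpiexec -n 4 ./foo
```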

-e experiment_dir

Override default experiment archive name to generate and/or analyze experiment_dir. Sets SCOREP_EXPERIMENT_DIRECTORY.

-f filter_file

Use the measurement filter rules from filter_file. Sets SCOREP_FILTERING_FILE.

-l lock_file

Block start of measurement while lock_file exists.

-R num_runs

Specifies the number of measurement runs per configuration (default: 1).

-M config_file

Specifies a configuration file describing multi-run experiment settings. See MULTI-RUN EXPERIMENTS below for details.

ENVIRONMENT

Environment variables with the 'SCAN_' prefix may be used to configure the scan nexus itself (a serial workflow manager process), rather than the instrumented application process(es) being measured, which are configured via their own (Score-P) environment variables. Configuration specified on the nexus command line takes precedence over that specified via environment variables. See MULTI-RUN EXPERIMENTS below for details on interactions with configuration file settings.

Environment variables controlling scan

SCAN_ANALYZE_OPTS

Specifies trace analyzer options (default: none). For details on the supported options, see scout(1).

SCAN_CLEAN

If enabled, removes event trace data after successful trace analysis (default: 'false').

SCAN_MPI_LAUNCHER

Specifies a non-standard MPI launcher name.

SCAN_MPI_RANKS

Specifies the number of MPI processes, for example in an MPMD use case or if the number of ranks is not automatically identified correctly. The specified number will also be used in the automatically generated experiment title. While an experiment title with an incorrect number of processes is harmless (though generally confusing), the correct number is required for automatic parallel trace analysis.

SCAN_OVERWRITE

If enabled, removes an existing experiment archive directory before measurement (default: 'false').

SCAN_SETENV

If environment variables are not automatically forwarded to MPI processes by the launcher, SCAN_SETENV specifies the option syntax that the launcher requires for this. For example, "-foo" results in passing "-foo key val" to the launcher, whereas "--foo=" results in "--foo key=val".
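A hedged sketch of the two syntax families (the option names "-x" and "--env" are illustrative; the actual flag is launcher-specific):

```shell
# Launcher expects separate-word syntax "-x KEY VAL":
export SCAN_SETENV="-x"
# Launcher expects assignment syntax "--env KEY=VAL":
export SCAN_SETENV="--env="
```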

SCAN_TARGET

If there is an imposter executable or script, for example, used to specify placement/thread binding, that precedes the instrumented target, it may be necessary to explicitly identify the target executable by setting SCAN_TARGET to the executable name.

SCAN_TRACE_ANALYZER

Specifies an alternative trace analyzer to be used (e.g., scout.mpi or scout.hyb). If 'none' is specified, automatic trace analysis is skipped after measurement.

SCAN_WAIT

Time in seconds to wait for synchronization of a distributed filesystem after measurement completion.

Common Score-P environment variables controlling the measurement

SCOREP_EXPERIMENT_DIRECTORY

Explicit experiment archive title.

SCOREP_ENABLE_PROFILING

Enable or disable runtime summarization.

SCOREP_ENABLE_TRACING

Enable or disable event trace generation.

SCOREP_FILTERING_FILE

Name of run-time measurement filter file.

SCOREP_VERBOSE

Controls the generation of additional (debugging) output from the Score-P measurement system.

SCOREP_TOTAL_MEMORY

Size of per-process memory in bytes reserved for Score-P.

For further details, please refer to the Score-P documentation and/or the output of 'scorep-info config-vars'.

MULTI-RUN EXPERIMENTS

scan also provides means to automate the generation of multiple measurements with varying configuration settings. This workflow can be employed for various analysis objectives, as long as the variations are based on environment variables. Likely candidates are:

  1. Increasing the statistical significance through multiple repetitions of measurements with identical settings.
  2. Spreading multiple hardware-counter measurements over different runs to limit the measurement overhead and/or to overcome hardware limitations (e.g., number of hardware performance counters that can be measured simultaneously).
  3. Performing a series of measurements with varying application settings, like problem size or input data.

Results of such multi-run experiments can be used individually, aggregated manually using various Cube tools, or be passed to the square(1) command for automated report aggregation.

Attention
The degree of non-determinism in an application's runtime behavior will influence the informative value of any aggregated result. Only with sufficient similarity between application runs will the combination of results be useful.

Multi-run experiments are set up using a plain-text configuration file, which is passed to the scan command via the -M command-line option. In this file, the beginning of each measurement run configuration is marked by a line starting with a single dash (-) character; the remainder of that line is ignored. Each subsequent line, up to either the next run separator or the end of the file, may contain at most one variable setting of the form 'VARIABLE=VALUE'. Optionally, a section with global settings can be specified at the beginning of the configuration file, introduced by a line starting with two dashes (--); the remainder of this line is again ignored. A variable defined in the global section is applied in all subsequent run configurations unless it is overwritten by a run-specific setting. The configuration file format also allows for single-line comments starting with a hash character (#) and blank lines, both of which are ignored.

For example, the following multi-run configuration file defines a series of four subsequent measurements with different settings:

    # example run configuration file
    # global section
    -- this can also hold comments
    SCOREP_ENABLE_TRACING=true

    -
    # first run with two PAPI metrics
    SCOREP_METRIC_PAPI=PAPI_TOT_CYC,PAPI_TOT_INS

    -
    # second run with different PAPI metric and increased Score-P memory
    SCOREP_METRIC_PAPI=PAPI_LD_INS
    SCOREP_TOTAL_MEMORY=42M

    - third run with different PAPI metric
    SCOREP_METRIC_PAPI=PAPI_VEC_DP

    -
    # fourth run using only global settings

Note that measurement configuration settings are not limited to scan or Score-P environment variables, but also allow for setting arbitrary variables in the measurement execution environment. Also, the order in which measurements are specified may have an impact on the aggregated result, see square(1) for details.

To ensure consistency and reproducibility, the environment must not contain Score-P or Scalasca variables when using a multi-run configuration file. Otherwise, scan will abort with an error providing a list of the offending variables. That is, all Score-P/Scalasca settings to be applied have to be placed in either the global or run-specific sections of the configuration. Moreover, all variables used anywhere in the configuration file will be unset before each measurement run, and then set to either the global or run-specific value if applicable, thus avoiding side effects from variable settings not specified in the configuration file. The Score-P variable SCOREP_EXPERIMENT_DIRECTORY will not have any effect inside the configuration file, as an automatic naming scheme, an extension to the default Scalasca scheme, is enforced to keep the multi-run measurement directories consistent. To set the experiment directory a priori, the scan command-line option -e can be used. Other scan options that control the measurement (-q, -t, and -s) will be ignored when used with a configuration file and should be set through the respective environment variables in the configuration file for consistency.

In addition to multi-run experiments with different configuration settings, scan supports repeating a single or a set of measurements multiple times via the -R command-line option, for example, to provide increased statistical significance. For measurements without a configuration file, the measurement will be repeated the requested number of times with the current environment. In case of multi-run configurations, each individual run will be repeated the given number of times with the specified configuration.
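Two hedged invocation sketches (the configuration file name runs.cfg is hypothetical):

```shell
# Three repetitions of a plain summary measurement, using the
# current environment:
scan -R 3 mpiexec -n 4 ./foo

# Two repetitions of every run defined in a multi-run configuration file:
scan -R 2 -M runs.cfg mpiexec -n 4 ./foo
```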

For multi-run experiments, scan creates a common base directory which contains the result of each individual measurement run stored in a subdirectory. The names of the base directory and of the individual experiment directories contain the number of configurations as well as the number of repetitions. To support reproducibility, the configuration used is stored in the file scalasca_run.cfg in the common base directory.

EXIT STATUS

scan exits with status 0 if measurement and automatic trace analysis (if configured) were successful, and greater than 0 if errors occurred.

NOTES

While parsing the arguments, unrecognized flags might be reported as ignored, and unrecognized options with required arguments might need to be quoted.

Instrumented applications can still be run without using scan to generate measurements, however, measurement configuration is then exclusively via Score-P environment variables (which must be explicitly exported to MPI processes) and trace analysis is not automatically started after event trace collection.

EXAMPLES

scan mpiexec -n 4 foo args
Execute the instrumented MPI program foo with command-line arguments args, collecting a runtime summary (default). Results in an experiment directory scorep_foo_4_sum.

OMP_NUM_THREADS=3 scan -s mpiexec -n 4 foobar
Execute the instrumented hybrid MPI+OpenMP program foobar, collecting a runtime summary (default, but explicitly requested). Results in an experiment directory scorep_foobar_4x3_sum.

OMP_NUM_THREADS=3 scan -q -t -f filter bar
Execute the instrumented OpenMP program bar, collecting only an event trace with the run-time measurement filter filter applied. Trace collection is immediately followed by Scalasca's automatic trace analysis. Results in an experiment directory scorep_bar_Ox3_trace.

SEE ALSO

scalasca(1), square(1), scout(1)



Scalasca    Copyright © 1998–2019 Forschungszentrum Jülich GmbH, Jülich Supercomputing Centre
Copyright © 2009–2015 German Research School for Simulation Sciences GmbH, Laboratory for Parallel Programming