Security is an important issue for FLAM. FLAM can help to fulfill many security requirements (e.g. PCI-DSS, ISO 27001, ...) using encryption (OpenPGP, SSH), hardware security modules (PKCS#11 and IBM CCA/ICSF for key exchange and signing), strong access control, checksums, antivirus scanning, and much more. Nevertheless, the following points must be observed for the secure operation of FLAM.
In the install.txt for your platform, the prerequisites section lists the software on which FLAM is based or depends. If nothing is listed, only the operating system with its language environment and the corresponding libc is required. In the about.txt you can see all the external libraries and their versions that FLAM integrates. Both pieces of information can be used to determine whether the use of FLAM might be affected by an emerging security vulnerability.
The command flcl version prints a version for each software component. The installation of FLAM is coherent if all build numbers are the same; the build number is the last number of the version, after the minus character. This check is especially important for dynamically linked parts of the software. We recommend establishing an installation verification process that runs the built-in function VERSION and verifies that all build numbers are identical.
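The recommended verification can be scripted with standard tools. The sketch below assumes, purely for illustration, that each line of the version output ends in a version string like 5.1.24-84739 with the build number after the minus character; the real output format of your FLAM release may differ.

```shell
# Sketch of an installation verification step (assumption: component lines
# end in "major.minor.fix-build", so the build number follows the last '-').
check_builds() {
  # count distinct build numbers found on stdin
  distinct=$(awk -F'-' 'NF>1 && $NF ~ /^[0-9]+$/ {print $NF}' | sort -u | wc -l)
  [ "$distinct" -eq 1 ]
}

# intended use on a system with FLAM installed:
#   flcl version | check_builds && echo "installation coherent"

# demonstration with hypothetical output:
printf 'FLCL 5.1.24-84739\nFLUC 5.1.24-84739\n' | check_builds && echo coherent
```

A mismatch in the build numbers (non-zero exit code) indicates that parts of the installation, especially dynamically linked modules, come from different builds.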
limes supports code signing for modules and installation packages where the platform permits it. Where it does not, limes provides secure checksums to allow verification of the packages after download.
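Where only checksums are published, a download can be verified with standard tools. The sketch below uses sha256sum and hypothetical file names; use the package and digest names from the actual distribution.

```shell
# work in a scratch directory
dir=$(mktemp -d) && cd "$dir"
# stand-in for the downloaded package (hypothetical name)
printf 'demo payload' > flam-package.tar.gz
# the vendor would publish this digest file; here we compute it locally
sha256sum flam-package.tar.gz > flam-package.tar.gz.sha256
# verification step after download: prints "flam-package.tar.gz: OK"
sha256sum -c flam-package.tar.gz.sha256
```

If the package was altered in transit, the check mode reports FAILED and exits with a non-zero status, which makes it suitable for automated installation scripts.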
A number of environment variables and system symbols are security related. Some important examples are listed below (please note: this list is incomplete):
See Used Environment Variables and verify these settings. To get a list of the current environment, use the command below:
flcl info get.system
For operating systems implementing a secure access facility (e.g. z/OS), FLAM supports a lot of policies and additional resources for finer control over the usage of FLAM. For more information see SAF Consideration and the install.txt.
Unfortunately, other operating systems offer no such additional protection of application-specific resources.
With SAF support it is possible to enforce minimal key, password, or hash lengths. Critical operations like input logging, packet capturing, antivirus scanning, and others can be deactivated.
There are also many policies and resources defined for the supported cryptographic features (PGP, EDC, SSH), which are not explained in more detail here.
Credentials (passwords, PINs, keys, ...) entered in CLP strings are flagged as protected and are not printed in clear form. Such values are replaced by **** SECRET **** unless input logging is activated; in that case the complete command string is written to the log and might contain critical values. After parsing the command line, FLAM knows the critical values and replaces them.
Additionally, it is better to provide such sensitive input within parameter files, because a file can be protected more effectively. The best solution in such cases, however, is to use a hardware security module (HSM).
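A minimal sketch of preparing such a parameter file on a POSIX system follows; the password= line is illustrative only, and the real CLP parameter file syntax is described in the FLAM manual.

```shell
# Sketch: keep credentials off the command line by putting them into a
# parameter file that only the owner can read.
umask 077                          # files created below are owner-only
cat > flam.par <<'EOF'
password='example-secret'
EOF
chmod 600 flam.par                 # owner read/write, nothing for group/world
ls -l flam.par                     # verify: -rw-------
# the file can then be referenced from the flcl command string instead of
# typing the password in clear form (see the FLAM manual for the syntax)
```

Setting the umask before creating the file avoids a window in which the file briefly exists with wider permissions.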
FLAM supports hardware security modules (HSMs) over a service provider interface for session key procedures and signing (PGP, FLAMFILE, ...). Currently, PKCS#11 and IBM CCA (including ICSF) are supported. We recommend using the HSM support instead of clear passwords for key stores and other critical items. Misuse of the service provider interface can be prevented completely with SAF support.
All software cryptographic operations are performed by a central crypto kernel (CryCore). This singleton determines the optimal implementation for the computer in use and supports hardware acceleration where possible. Before a concrete algorithm is used for the first time, a self-test against the NIST test vectors is performed to ensure that the loaded functions implement the algorithm correctly. On IBM mainframes mainly the CPACF functions are used to implement the cryptographic algorithms; on all other platforms the libcrypto library of the OpenSSL project is used.
To generate keys (mainly for SSH) it is important to have high entropy. By default, FLAM requires high entropy, which can result in an error if only weak randomness is available. The environment variable FL_ALLOW_LOW_ENTROPY permits low entropy to prevent such errors, and the SAF policy LOW.ENTROPY can be used to forbid this override. In some cases the quality of randomness is not important, and allowing low entropy can be useful; in other contexts it would be dangerous, and it is better to prevent it.
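If low entropy is acceptable for a single run, the override can be scoped to a subshell so it does not linger in the environment. The variable name FL_ALLOW_LOW_ENTROPY comes from the text above; the value ON is an assumption, so check the FLAM documentation for the accepted values.

```shell
# Sketch: allow weak randomness for one non-critical invocation only,
# scoped to a subshell (assumption: the value ON enables the override).
(
  export FL_ALLOW_LOW_ENTROPY=ON
  env | grep '^FL_ALLOW_LOW_ENTROPY='   # confirm the override is visible
  # hypothetical: run the flcl key generation here
)
# outside the subshell the variable is no longer set
```

Keeping the override out of profile scripts ensures that security-relevant key generation on the same host still fails loudly when only weak randomness is available.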
FLAM supports secure deletion of sensitive data (keys, passwords, PINs). This means the memory is erased as early as possible, and the overwriting of the memory is not eliminated by the optimizer in the release code.
FLAM supports antivirus scanning over a service provider interface. Certain security standards require scanning the clear content of a file before it is encrypted and stored on a remote system; in such a case the antivirus scanner can be used. Our standard implementation uses clamAV, but the interface can be implemented against any kind of antivirus scanner. At the same time, this interface is a backdoor to intercept clear data and should be closed in all other cases. FLAM implements a complete set of SAF control mechanisms which can be used to ensure this.
Input logging writes the complete input parameter string to the log (this is not done by default and is only important for debugging in a few cases). At this point the content is not yet known and might contain critical data or credentials.
The SAF policy LOG.INPUT
can be used to prevent the activation of input
parameter logging.
Packet capturing is useful to find errors for remote connections (e.g. SSH) but packets might contain critical or sensitive data.
With the SAF policy SSH.PCAP.ALLOWED
you can control the usage of the
PCAP feature and prevent this kind of backdoor for normal processing.
The command pre- and post-processing is a powerful feature of FLAM and can be used once or per file, locally or remotely, at read or write time. The feature was implemented mainly for process automation, but running commands (often executable code) on remote systems can be dangerous. To prevent misuse, FLAM implements a set of SAF controls.
The FLAM started task (FLMSTC) on z/OS is an active component running in the background that provides services to implement the subsystems, including the LE-less interface for the FLUC record interface. The FLAMSTC provides the scheduling of SRBs in another enclave for zIIP support and the possibility to run authorized functions. This is realized with three different PC routines. The FLAMSTC does not implement its own supervisor calls (SVCs); functions of the FLAM and FLUC subsystems are called as part of I/O SVCs.
The QA process uses zACS to penetrate the PC routines and the corresponding recovery routines. The only findings of the zACS scan are regular program terminations of the FLAMSTC with 0C4.
The error trace is an important feature of FLAM which helps a lot to analyze errors. For a good cause determination, the data involved in the error is often written to the error trace. This means that the error trace could include parts of sensitive data items, so in case of an error this trace might need protection. The situation is similar to dump processing in an error situation: such a dump could contain critical data and must be protected.
limes uses vulnerability scanners and static code analysis for conformance with certain coding standards to prevent critical mistakes in our programs. Additionally, CPPCHECK and ScanBuild are used for static code analysis.
For each platform, limes uses the standard memory checker (e.g. valgrind on Linux) for dynamic code analysis running over all test cases. However, many platforms lack a tool for runtime analysis. Therefore, limes has implemented its own resource checker (memory, file handles, stack, ...) in the debug code to detect stack and memory overflows, missing frees (leaks), and unclosed or misused files.
limes uses Coco(r) to determine the code coverage of FLAM and to ensure that each part of the code is tested. Squish(r) is used to automate testing of our GUIs. For automated testing of each ISPF panel and limes command, we use the API of the x3270 terminal program.
For all testing, limes uses its own test framework which runs all unit and regression tests on all platforms. The cross-reference tests ensure that data produced by standard tools (e.g. gzip, openssl enc, gpg) is understood on each platform and that output produced by FLAM on each platform can be read with the standard tools.
We use continuous integration: all build tests run after each build, a rebuild must be successful before code is taken over from the developer repository to the main repository, and all long-running tests (performance, large files, ...) are started automatically each night and checked each morning.
This allows us to deliver rolling releases, where a customer can request a new parameter on Wednesday and we can deploy a new revision or release on Monday. For this, each weekend is currently used to clean everything, rebuild everything, test everything, package everything, install the new package on a clean system, and test everything again. Only if everything was successful do we publish the new build for our users. Sometimes it takes longer to get all tests to pass, and then we cannot release a new version that week; all regression tests must be successful before we publish a new version of FLAM.
Publishing includes an encrypted code escrow of the complete source used to build the release. All build results and all test results are stored in an archive for this publication. So if a customer has a problem with a certain build, we can review its complete state, including all test results, compile and link lists, and so on. From this repository we can easily install each published build to set up the customer environment and reproduce an error or a behavior.
limes uses a mixture of SCRUM and team concert with strict coding guidelines, where requirements, design, development, build, test, packaging, installation, testing again, and deployment run in parallel. Version control is done with GIT, and everything is integrated in an Eclipse workspace under Arch Linux for our developers. The software development process ensures that all described QA measures are adhered to.
The documentation is generated from the source code and is therefore always in sync with the application and the components used. The quality of the description for a parameter or of the manual page for an object or overlay may vary, but every implemented feature is documented and vice versa. We use the FLAMCLEP command string parser: everything this compiler accepts is defined in tables, and the help message and the manual page must be provided, otherwise a table error occurs.
The same method is used for interface specifications (APIs), where each function and each parameter of each function must be documented in the form of comments in the code, otherwise the documentation generation fails. This process ensures that the documentation is always up to date and in sync with the implementation, and the user can trust that everything is documented.