In-Memory Computing Security

The SAP HANA in-memory database holds the bulk of its data in memory for maximum performance, but it still uses persistent storage as a fallback in case of failure. Data and undo log information (which is part of the data) are automatically saved to disk at regular savepoints. In addition, all changes are captured as redo log entries, which are written synchronously to disk after each transaction COMMIT (the committing transaction waits for the disk write to complete).
After a power failure, the database can be restarted like a disk-based database.
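The savepoint and redo-log behavior described above can be observed from SQL. The following is a minimal sketch using the SYS monitoring views M_SAVEPOINTS and M_INIFILE_CONTENTS; the exact column names and the savepoint_interval_s parameter (commonly defaulting to 300 seconds) should be verified against the documentation for your revision.

```sql
-- Sketch: inspect recent savepoints (column names assume the standard
-- SYS.M_SAVEPOINTS monitoring view).
SELECT START_TIME, DURATION, CRITICAL_PHASE_DURATION
  FROM M_SAVEPOINTS
 ORDER BY START_TIME DESC;

-- The savepoint interval (in seconds) is configured in the persistence
-- section of global.ini:
SELECT KEY, VALUE
  FROM M_INIFILE_CONTENTS
 WHERE FILE_NAME = 'global.ini'
   AND SECTION   = 'persistence'
   AND KEY       = 'savepoint_interval_s';
```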
Start of SAP HANA Database
The SAP HANA system restart sequence restores the system to a fully operational state quickly.

Note
After a system restart, by default, not all tables are loaded into main memory immediately:
- Tables are reloaded "lazily" to keep the restart time short.
- The system returns to its last consistent state by replaying the redo log written since the last savepoint.
Restart Process

During a database restart (for example, after a crash), the data from the last completed savepoint can be read from the data volumes. In addition, the redo log entries written since the last savepoint are read from the log volumes.
When restarting, the system is restored from the savepoint versions of the data pages. In this way, all data changes made after the last savepoint are automatically discarded. After the savepoint state is restored, the redo log is replayed to restore the most recent committed state.
During startup, the uncommitted changes that were part of the last savepoint must also be rolled back. To undo these changes, undo information is stored as part of the savepoint. This is done by the undo manager, which provides an interface for writing undo records. The undo information is written to one virtual undo file per transaction. These virtual undo files consist of data pages, which are persisted in the data volume. This means that the undo entries first pass through the page buffer and are persisted no later than the next savepoint.
Startup Process: Persistence Layer Activities
Note
The time needed to restart a complete SAP HANA system or an individual tenant database depends heavily on the quantity and type of store (row store and column store), the way the system or tenant database was stopped, and further factors (see Important Factors for the Startup Performance in the next section).
Note
A regular shutdown writes a savepoint, so there are no redo log entries to replay after a regular shutdown.
While the row store is always loaded entirely, only the columns of essential column tables are loaded into memory; the remaining columns are loaded on request.
For example, if a query uses only some of the fields (columns) of a table, only those fields are loaded into memory at query execution time. All row-based tables (usually system tables) are available in main memory, and their size significantly influences the time required to start the database. Other factors that influence startup time are mentioned in the figure, SAP HANA Startup Process.
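The current load state of column tables and their individual columns can be queried from the monitoring views. This is a sketch assuming the standard SYS views M_CS_TABLES (where LOADED can be FULL, PARTIALLY, or NO) and M_CS_ALL_COLUMNS; MYSCHEMA and MYTABLE are placeholders.

```sql
-- Sketch: which column tables are currently loaded, and to what extent?
SELECT SCHEMA_NAME, TABLE_NAME, LOADED
  FROM M_CS_TABLES
 WHERE SCHEMA_NAME = 'MYSCHEMA';   -- MYSCHEMA is a placeholder

-- Per-column load state of one table:
SELECT COLUMN_NAME, LOADED
  FROM M_CS_ALL_COLUMNS
 WHERE SCHEMA_NAME = 'MYSCHEMA'
   AND TABLE_NAME  = 'MYTABLE';    -- MYTABLE is a placeholder
```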
Startup Process
Important Factors for the Startup Performance
Multiple factors influence the time needed to start an SAP HANA system and its in-memory databases. The following list identifies the most important ones:
- Amount of logs to be replayed (roll forward)
- Amount of changes by uncommitted transactions (roll back)
- Read performance of the log volumes (disks)
- Read/write performance of the data volumes (disks)
- Layout design: separate log and data disk areas (physically, not only logically)
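The disk-performance factors in the list above can be roughly assessed from SQL. The following sketch assumes the SYS monitoring view M_VOLUME_IO_TOTAL_STATISTICS and its aggregate read/write columns; check the view reference for your revision before relying on the exact names.

```sql
-- Sketch: cumulative I/O per volume type (TYPE distinguishes DATA and LOG),
-- useful for comparing read/write throughput of data and log volumes.
SELECT TYPE,
       TOTAL_READ_SIZE, TOTAL_READ_TIME,
       TOTAL_WRITE_SIZE, TOTAL_WRITE_TIME
  FROM M_VOLUME_IO_TOTAL_STATISTICS;
```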
Startup Process: Tables
During normal operation, SAP HANA periodically (once per day) saves a list of the column tables that are currently loaded. During a restart, this list is the basis for loading the necessary tables back into main memory. Reloading column tables in this way restores the database to a fully operational state more quickly. However, it creates performance overhead and may not be necessary in non-productive systems. You can deactivate the reload feature in the indexserver.ini file by setting the reload_tables parameter in the sql section to false. In addition, you can configure the number of tables whose attributes are loaded in parallel using the tables_preloaded_in_parallel parameter in the parallel section of indexserver.ini.
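Both parameters can be set online with ALTER SYSTEM ALTER CONFIGURATION. The statements below are a sketch using the parameter names given above; the value 10 for tables_preloaded_in_parallel is purely illustrative.

```sql
-- Sketch: disable lazy reload of column tables after a restart.
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('sql', 'reload_tables') = 'false' WITH RECONFIGURE;

-- Sketch: control how many tables are loaded in parallel during restart
-- ('10' is an illustrative value, not a recommendation).
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('parallel', 'tables_preloaded_in_parallel') = '10' WITH RECONFIGURE;
```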
Note
You can mark individual columns as well as entire column tables for preload.
When the Preload checkbox is selected, tables are loaded into memory automatically after an index server start. The current status of the Preload checkbox is visible in the PRELOAD column of the system table TABLES; possible values are FULL, PARTIALLY, and NO. For individual columns, the PRELOAD column of the system table TABLE_COLUMNS shows TRUE or FALSE.
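The preload flag can also be set in SQL with ALTER TABLE ... PRELOAD and then verified in the system tables mentioned above. MYTABLE and MYCOLUMN below are placeholders.

```sql
-- Sketch: mark an entire table, a single column, or nothing for preload.
ALTER TABLE MYTABLE PRELOAD ALL;
ALTER TABLE MYTABLE PRELOAD (MYCOLUMN);
ALTER TABLE MYTABLE PRELOAD NONE;

-- Verify the resulting preload flags:
SELECT TABLE_NAME, PRELOAD
  FROM TABLES WHERE TABLE_NAME = 'MYTABLE';
SELECT COLUMN_NAME, PRELOAD
  FROM TABLE_COLUMNS WHERE TABLE_NAME = 'MYTABLE';
```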
Note
When fields of large column tables are not in main memory, the first access to the table might be significantly slower, because all requested columns are loaded into main memory before the query can be executed. This applies even if only a single record is selected.
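To avoid this first-access penalty, a table (or selected columns) can be loaded into memory explicitly with the LOAD statement, for example after a restart. MYTABLE and MYCOLUMN are placeholders.

```sql
-- Sketch: warm a table (or individual columns) before the first query.
LOAD MYTABLE ALL;
LOAD MYTABLE (MYCOLUMN);

-- UNLOAD removes a table from memory again:
UNLOAD MYTABLE;
```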
- Show information about the list of tables to be preloaded (not the preload-flag-related columns):
  hdbcons "tablepreload i"
- Show the full content (list of tables):
  hdbcons "tablepreload c -f"
- Write additional preload information to a virtual file in the data volumes:
  hdbcons "tablepreload w -s"
- Show the full list of help topics:
  hdbcons "help"
- Show hdbcons help on a specific command:
  hdbcons "help tablepreload"
Caution
Simply selecting all tables for preload in order to accelerate initial queries can slow down start-up time considerably. The Preload checkbox is a tuning option and should be used carefully, depending on the individual scenario and requirements.