How To Optimize A Data Guard Configuration – The Geek Diary


Data Guard is a method used by many administrators to protect their critical data. Guarding data in this manner is essential for organizations that store large amounts of data and need to ensure that it is protected against corruption, accidental deletion, system failure, or intentional tampering. In a Data Guard configuration, the database that serves production workloads is called the "primary" database, and the synchronized copies that protect it are called the "standby" databases.

If you're a DBA (database administrator), you've probably heard of Data Guard. Data Guard replicates a database from a primary server to one or more standby servers, and you may already have implemented a Data Guard configuration yourself.

Data protection is a critical element of any IT infrastructure, especially for organizations that store sensitive information. For organizations subject to an active data retention policy, backup and recovery processes must ensure that data is protected and always available, regardless of the state of the IT environment.

In this article, we look at how to monitor the performance of a Data Guard configuration and how to optimize redo transport and SQL Apply for best performance.

Monitor configuration performance with Enterprise Manager Cloud Control


The Performance Summary page displays the following charts:

  • Redo generation rate: Shows the redo generation rate (in KB per second) on the primary database.
  • Apply rate: Shows the apply rate (in KB per second) on the standby database. If real-time apply is active, this statistic shows the actual apply rate, averaged over the last three log files.
  • Lag times: Shows the transport lag and the apply lag. Transport lag is the estimated amount of redo (expressed in seconds) that is not yet available on the standby database. Apply lag is the approximate number of seconds by which the standby database lags behind the primary database.

On the performance monitoring page, you can start a test application to create a workload on the primary database. This lets you view the performance metrics while the primary database is under load.
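If you prefer to check the same lag information outside of Enterprise Manager, the transport lag and apply lag can also be read from the V$DATAGUARD_STATS view; a minimal sketch, assuming the query is run on the standby database:

SQL> SELECT NAME, VALUE, TIME_COMPUTED FROM V$DATAGUARD_STATS WHERE NAME IN ('transport lag', 'apply lag');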

Optimizing redo transport services

Optimize redo transport using the following methods:

  • Optimizing asynchronous redo transfer with multiple archiver processes
  • Redo transport compression

For more information on these methods, see the following sections.

Setting the ReopenSecs database property


If you specify a value for the ReopenSecs database property, this value is applied to the REOPEN attribute of the LOG_ARCHIVE_DEST_n initialization parameter of the sending instance. The sending instance can be a primary database instance or a Far Sync instance. The REOPEN attribute of the LOG_ARCHIVE_DEST_n parameter specifies the minimum number of seconds that must pass before the process sending the redo retries access to a previously failed destination. REOPEN applies to all errors, not just connection failures; these errors include network failures, disk errors, and quota exceedances.

Summary

  • This property sets the minimum number of seconds before a redo transport process attempts to access a previously failed destination.
  • Broker default: 300
  • Set it on a standby database or Far Sync instance.
  • The property applies to the REOPEN attribute of the LOG_ARCHIVE_DEST_n initialization parameter of the sending instance.

DGMGRL> EDIT DATABASE 'london' SET PROPERTY 'ReopenSecs'=600;
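To confirm the change, you can display the broker property and the resulting REOPEN setting; a minimal sketch, assuming the same configuration member name 'london' as above and that the query is run on the sending instance:

DGMGRL> SHOW DATABASE 'london' 'ReopenSecs';
SQL> SELECT DEST_ID, REOPEN_SECS FROM V$ARCHIVE_DEST WHERE STATUS = 'VALID';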

Set the NetTimeout property of the database

If you specify a value for the NetTimeout database property, it is copied to the NET_TIMEOUT attribute of the LOG_ARCHIVE_DEST_n initialization parameter of your primary database or Far Sync instance. You can use the NET_TIMEOUT attribute to override the default network timeout interval set for the system hosting the primary database. Without the NET_TIMEOUT attribute, the primary database may block for the duration of the default network timeout. By specifying a smaller (nonzero) value for NET_TIMEOUT, you allow the primary database to mark the destination as failed after a user-defined time interval.

Summary

  • This property specifies the number of seconds the log writer process (LGWR) waits for Oracle Net Services to respond to a request.
  • Broker default: 30
  • The property applies to the NET_TIMEOUT attribute of the LOG_ARCHIVE_DEST_n initialization parameter of the sending instance.

DGMGRL> EDIT DATABASE 'london' SET PROPERTY 'NetTimeout'=20;

Note: Remember to specify an appropriate value when operating in maximum protection mode. Falsely detecting a network failure can cause the primary instance to shut down if there are no other standby databases in the required protection mode with which the primary database instance can communicate.
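To verify the value in effect for redo transport, you can query the NET_TIMEOUT column of V$ARCHIVE_DEST on the sending instance; a minimal sketch, assuming the standby is served by LOG_ARCHIVE_DEST_2:

SQL> SELECT DEST_ID, NET_TIMEOUT FROM V$ARCHIVE_DEST WHERE DEST_ID = 2;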

Optimizing redo transmission with MaxConnections

To use all available bandwidth, the redo transport mechanism can transfer a large redo log file in parallel through several archiver processes. This behavior is controlled by the MaxConnections property of the database. This architecture increases redo transfer throughput and allows redo data to reach the standby databases sooner. Improving the transfer rate increases the availability of data at the standby site.
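Because the parallel transfer is carried out by archiver (ARCn) processes, their count limits the parallelism that MaxConnections can achieve. As a rough sketch (the value 8 is purely illustrative), the archiver process count can be raised dynamically on the primary:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=8;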

Set the MaxConnections property of the database


If you specify a value for the MaxConnections property of the database, this value is passed to the MAX_CONNECTIONS attribute of the LOG_ARCHIVE_DEST_n initialization parameter of your primary database or Far Sync instance. The MAX_CONNECTIONS attribute of the LOG_ARCHIVE_DEST_n parameter determines the number of simultaneous connections over which redo log files are transferred to a remote destination. MAX_CONNECTIONS defaults to 1, meaning that a single connection is used for communication and data transfer. The maximum value of MAX_CONNECTIONS is 20.

DGMGRL> EDIT DATABASE 'london' SET PROPERTY 'MaxConnections'=15;

Note: You must set the initialization parameter LOG_ARCHIVE_MAX_PROCESSES greater than or equal to the value of MAX_CONNECTIONS to obtain the desired number of parallel connections. If the value of the MAX_CONNECTIONS attribute is greater than the value of LOG_ARCHIVE_MAX_PROCESSES, Data Guard uses only the available archiver processes.
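To double-check that the broker property and the archiver process count are consistent, compare the two settings; a minimal sketch, again assuming the database name 'london':

DGMGRL> SHOW DATABASE 'london' 'MaxConnections';
SQL> SHOW PARAMETER log_archive_max_processes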

Compress redo data by setting the RedoCompression property

If the communication network to the remote databases is a high-latency, low-bandwidth WAN link, and the volume of redo sent to the standby databases is significant, you must use the network bandwidth as efficiently as possible. Redo transport compression can be enabled for any remote destination and for all redo transport modes to reduce network bandwidth usage. Redo compression can be enabled or disabled by setting the RedoCompression property of the Oracle Data Guard broker. The property applies to the COMPRESSION attribute of the LOG_ARCHIVE_DEST_n initialization parameter. By default, redo compression is disabled. When you add a database to the Data Guard configuration, the Data Guard broker automatically determines whether network compression is enabled or disabled for the added standby database and sets the property accordingly. Note: The Oracle Advanced Compression option is required to use this feature.

Summary

  • This feature can be enabled for all redo transport modes (ASYNC and SYNC).
  • The property applies to the COMPRESSION attribute of the LOG_ARCHIVE_DEST_n initialization parameter.
  • Determine whether redo compression is enabled by querying the COMPRESSION column of the V$ARCHIVE_DEST view (see the example query after this list).
  • Enable with DGMGRL: DGMGRL> EDIT DATABASE 'london' SET PROPERTY 'RedoCompression'='ENABLE';
  • Enable with SQL: SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_3='SERVICE=london SYNC COMPRESSION=ENABLE';
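That query might look like the following; a minimal sketch that lists the compression setting of every valid archive destination on the sending instance:

SQL> SELECT DEST_ID, DEST_NAME, COMPRESSION FROM V$ARCHIVE_DEST WHERE STATUS = 'VALID';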

Delaying the application of redo

You can delay the application of changes to standby databases to protect against user errors and corruption. This protects the standby database from damaged or incorrect data. The apply process also validates the redo records, which helps prevent corrupt redo from being applied. For example, if a critical table is accidentally dropped from the primary database, you can prevent this action from affecting the standby database by delaying the application of changes to it. In maximum protection or maximum availability mode, Data Guard ensures that no data is lost even if the apply is running late. If you set a delay for a destination for which real-time apply is enabled, the delay is ignored. Note: You can use Flashback Database as an alternative to configuring an apply delay. Using Flashback Database is an Oracle best practice.
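If you follow the Flashback Database recommendation instead of an apply delay, enabling it on the standby database might look like the following minimal sketch; the 4320-minute (three-day) retention target is only an illustrative value, a fast recovery area must already be configured, and on older releases the database must be mounted rather than open:

SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=4320;
SQL> ALTER DATABASE FLASHBACK ON;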

Set the DelayMins property of the database to delay the application of redo

Use the configurable DelayMins database property to specify the number of minutes that log apply services must wait before applying redo data to the standby database. This property applies to the DELAY attribute of the LOG_ARCHIVE_DEST_n initialization parameter.

DGMGRL> EDIT DATABASE 'london' SET PROPERTY 'DelayMins'=5;

Broker default: 0 (log apply services apply the redo data as soon as possible)
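To see the delay currently in force for a destination, you can query the DELAY_MINS column of V$ARCHIVE_DEST on the sending instance; a minimal sketch, assuming the standby is served by LOG_ARCHIVE_DEST_2:

SQL> SELECT DEST_ID, DELAY_MINS FROM V$ARCHIVE_DEST WHERE DEST_ID = 2;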

Use Enterprise Manager to delay redo apply

  1. On the Data Guard page, select the standby database and click Edit.
  2. On the Edit Standby Database Properties page, click Standby Role Properties.
  3. In the Apply Delay field, enter the delay time (in minutes).
  4. Click Apply.

SQL Apply optimization


The MAX_SERVERS, APPLY_SERVERS, and PREPARE_SERVERS parameters can be modified to control the number of processes assigned to SQL Apply. Because SQL Apply assigns one process each to the READER, BUILDER, and ANALYZER roles, the following relationship between the three parameters must hold: APPLY_SERVERS + PREPARE_SERVERS = MAX_SERVERS - 3, where:

  • APPLY_SERVERS: Number of APPLIER processes used to apply changes
  • MAX_SERVERS: Total number of processes used by SQL Apply to mine and apply redo
  • PREPARE_SERVERS: Number of PREPARER processes used to prepare changes

Use the DBMS_LOGSTDBY.APPLY_SET procedure to modify the APPLY_SERVERS, MAX_SERVERS, and PREPARE_SERVERS parameters. Query DBA_LOGSTDBY_PARAMETERS to display the SQL Apply settings. Note: For detailed information about the DBMS_LOGSTDBY.APPLY_SET procedure, see the Oracle Database PL/SQL Packages and Types Reference.
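Putting the two together, a minimal sketch of reviewing the current values and then raising one of them (the value 20 is purely illustrative) could look like this; remember that APPLY_SERVERS + PREPARE_SERVERS = MAX_SERVERS - 3 must still hold, so MAX_SERVERS may need to be raised first:

SQL> SELECT NAME, VALUE FROM DBA_LOGSTDBY_PARAMETERS WHERE NAME IN ('MAX_SERVERS', 'APPLY_SERVERS', 'PREPARE_SERVERS');
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('APPLY_SERVERS', 20);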

Setting the number of APPLIER processes

Before changing the number of APPLIER processes, perform the following steps to determine whether adjusting their number will increase throughput:

  1. Determine whether the APPLIER processes are busy by running the following query:

SELECT COUNT(*) AS IDLE_APPLIER FROM V$LOGSTDBY_PROCESS WHERE TYPE = 'APPLIER' AND STATUS_CODE = 16166;

  2. After verifying that there are no idle APPLIER processes, determine whether there is enough work for additional APPLIER processes by running the following query:

SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME LIKE 'transactions%';

The second query returns two statistics that increase monotonically: the number of transactions ready to be applied by the APPLIER processes and the number of transactions already applied. If the difference between transactions ready and transactions applied is more than twice the number of available APPLIER processes, you can improve throughput by increasing the number of APPLIER processes. Before increasing the number of APPLIER processes, keep in mind the requirement: APPLY_SERVERS + PREPARE_SERVERS = MAX_SERVERS - 3
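If both checks indicate a shortage of APPLIER processes, the increase itself is done with DBMS_LOGSTDBY.APPLY_SET, just as shown for PREPARER processes later in this article; a minimal sketch with illustrative values that respect the requirement above, assuming PREPARE_SERVERS is still at its default of 1:

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 24);
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('APPLY_SERVERS', 20);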

Setting the number of PREPARER processes

In rare cases, it may be necessary to change the number of PREPARER processes. Before increasing the number of PREPARER processes, ensure that the following conditions are met:

  • All PREPARER processes are busy.
  • The number of transactions ready to be applied is less than the number of available APPLIER processes.
  • Some APPLIER processes are idle.

Run the following queries to verify the above conditions:

  1. Determine whether all PREPARER processes are busy:

SELECT COUNT(*) AS IDLE_PREPARER FROM V$LOGSTDBY_PROCESS WHERE TYPE = 'PREPARER' AND STATUS_CODE = 16166;

  2. Determine whether the number of transactions ready to be applied is less than the number of APPLIER processes:

SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME LIKE 'transactions%';
SELECT COUNT(*) AS APPLIER_COUNT FROM V$LOGSTDBY_PROCESS WHERE TYPE = 'APPLIER';

  3. Determine whether any APPLIER processes are idle:

SELECT COUNT(*) AS IDLE_APPLIER FROM V$LOGSTDBY_PROCESS WHERE TYPE = 'APPLIER' AND STATUS_CODE = 16166;

Before increasing the number of PREPARER processes, keep in mind the requirement: APPLY_SERVERS + PREPARE_SERVERS = MAX_SERVERS - 3. It may be necessary to increase MAX_SERVERS before increasing PREPARE_SERVERS. Use the DBMS_LOGSTDBY.APPLY_SET procedure to increase the values of MAX_SERVERS and PREPARE_SERVERS, as shown in the following example:

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 26);
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('PREPARE_SERVERS', 3);
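After the change, a quick way to confirm that the additional processes have started is to count the SQL Apply processes per role; a minimal sketch:

SQL> SELECT TYPE, COUNT(*) FROM V$LOGSTDBY_PROCESS GROUP BY TYPE;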