This is the first in a series of posts that will show you how to set up one-way replication from one Oracle Database to another. It is assumed that GoldenGate is installed and that prerequisites, such as creating the GoldenGate database user, have been completed.
The first required step for GoldenGate basic replication is setting up and starting the Extract process. Remember that you should start the Extract first, to begin capturing changes made while the initial data load is running. If you can afford to take the source database down while the initial load runs, then you can start the Extract after the initial load instead.
Verifying Database-Level Supplemental Logging
When replicating from Oracle databases, you need to enable database-level supplemental logging on the source database. Supplemental logging ensures that Oracle writes the additional information to the redo logs that GoldenGate requires.
First, you can verify whether Oracle database supplemental logging is already enabled by running the following SQL*Plus command:
SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
You can enable Oracle database-level supplemental logging by running the following SQL*Plus command as a user with the ALTER DATABASE privilege:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
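Putting the check and the enable step together, a minimal SQL*Plus session might look like the following sketch (assuming you are connected as a user with the necessary privileges; the log switch at the end is optional but commonly recommended so that Extract reads from a fresh redo log that includes the supplemental data). SUPPLEMENTAL_LOG_DATA_MIN returns YES, NO, or IMPLICIT:

```sql
-- Check whether minimal supplemental logging is already enabled
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

-- If it returns NO, enable it
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Optionally switch the log file so subsequent redo includes the extra data
ALTER SYSTEM SWITCH LOGFILE;

-- Confirm the setting now shows YES
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
```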
Enabling Table-Level Supplemental Logging
GoldenGate requires the key values in addition to the changed data in order for the GoldenGate Replicat to apply the changed data to the target database. Adding supplemental logging to the source database tables ensures that the database logs the key values that GoldenGate needs in order to properly apply the updates on
the target database.
You can use the GoldenGate ADD TRANDATA command to force the database to log primary-key columns for all updates on your source database.
Behind the scenes, the ADD TRANDATA command is equivalent to the following command in an Oracle database:
ALTER TABLE VMEDDEV.MEMBERS
ADD SUPPLEMENTAL LOG GROUP GGS_MEMBERS_19387 (MEMBER_ID) ALWAYS;
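For a single table, the GGSCI session is straightforward; a minimal sketch, using the MEMBERS table from the example above (the password shown is a placeholder):

```
GGSCI> DBLOGIN USERID ggs_owner, PASSWORD xxxx
Successfully logged into database.
GGSCI> ADD TRANDATA VMEDDEV.MEMBERS
```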
It can become tiresome entering individual ADD TRANDATA commands, so when I am replicating an entire schema I like to use an OBEY file and generate the statements dynamically from the database, as follows:
set echo off
set verify off
set pagesize 2000
set linesize 250
set trim on
set heading off
set feedback off
spool diroby/VMEDDEV.add_trandata.obey
select 'add trandata VMEDDEV.' || table_name
from dba_tables where owner = 'VMEDDEV';
spool off
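The generated obey file then contains one ADD TRANDATA command per table in the schema; for example (the table names shown here are illustrative):

```
add trandata VMEDDEV.MEMBERS
add trandata VMEDDEV.CLAIMS
```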
Once the script has been generated, log in to GGSCI on the source system and execute it as follows:
GGSCI (zagami.ctn.dot.co.za) 2> dblogin userid ggs_owner, PASSWORD Ggs_owner;
Successfully logged into database.
GGSCI (zagami.ctn.dot.co.za as ggs_owner@orcl) 3> obey diroby/VMEDDEV.add_trandata.obey
It's good practice, once the ADD TRANDATA commands complete, to verify in SQL*Plus that supplemental logging has been successfully enabled. You can do this using the following query:
SQL> select owner, log_group_name, table_name
from dba_log_groups where owner = 'VMEDDEV';
Disabling Triggers and Constraints
You need to disable any database triggers or cascade-delete referential integrity constraints on your target tables. The reason is to prevent duplicate changes, because GoldenGate is already replicating the results of the trigger and cascade-delete operations from the source database. If you didn’t disable the constraints and triggers, GoldenGate would replicate those changes; then the triggers and cascade delete
constraints would also fire, causing duplicate changes on the target database.
Starting with GoldenGate version 11, a new SUPPRESSTRIGGERS option is available as part of the Replicat DBOPTIONS parameter, to automatically suppress the triggers from firing on the target. You can use it to avoid having to manually disable the triggers. This option is available for Oracle 10.2.0.5 databases and later, and for Oracle 11.2.0.2 databases and later.
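In the Replicat parameter file this would look something like the following sketch (the Replicat name, password, and MAP statement are illustrative assumptions, not part of the configuration built in this post):

```
REPLICAT repvmed1
USERID ggs_owner, PASSWORD xxxx
DBOPTIONS SUPPRESSTRIGGERS
MAP VMEDDEV.*, TARGET VMEDDEV.*;
```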
You can also use a SQL script such as the following Oracle database example to automatically generate your ALTER TABLE enable and disable commands for the cascade-delete constraints and triggers. Using a script to dynamically generate the commands from the database saves time and ensures more accuracy.
set echo off
set verify off
set pagesize 2000
set linesize 250
set trim on
set heading off
set feedback off
select 'alter table '||owner||'.'||table_name||
' disable constraint '||constraint_name||';'
from dba_constraints
where owner = 'VMEDDEV'
and constraint_type = 'R'
and delete_rule = 'CASCADE';
select 'alter trigger '||owner||'.'||trigger_name||' disable;'
from dba_triggers
where owner = 'VMEDDEV';
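Since both enable and disable commands are needed, the matching enable versions are the same queries with DISABLE swapped for ENABLE, for use when you later want to re-enable the triggers and constraints on the target:

```sql
select 'alter table '||owner||'.'||table_name||
' enable constraint '||constraint_name||';'
from dba_constraints
where owner = 'VMEDDEV'
and constraint_type = 'R'
and delete_rule = 'CASCADE';

select 'alter trigger '||owner||'.'||trigger_name||' enable;'
from dba_triggers
where owner = 'VMEDDEV';
```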
Verifying the Manager Status
The Manager process manages all of the GoldenGate processes and resources. Before you can start the Extract and Replicat processes, you need to verify that the GoldenGate Manager process is running on the source and target servers.
For the basic replication configuration, the Manager parameter file on the source and target servers needs to contain the port number, as shown in the following example:
GGSCI (zagami.ctn.dot.co.za as ggs_owner@orcl) 348> edit param mgr
-- GoldenGate Manager
PORT 7809
You can use the INFO MGR command to verify that the Manager is running:
GGSCI (zagami.ctn.dot.co.za as ggs_owner@orcl) 348> info mgr
Manager is running (IP port zagami.ctn.dot.co.za.7809, Process ID 4089).
If the GoldenGate Manager isn’t running, you can start it using the following command:
GGSCI (zagami.ctn.dot.co.za as ggs_owner@orcl) 348> start manager
Configuring the Local Extract
Now that you've made sure the Manager is running, let's configure the Local Extract. To do this, you first need to create a parameter file for the Extract. In this example we configure the Local Extract to capture all the SQL DML changes from the VMEDDEV schema.
To edit in GGSCI, you enter EDIT PARAMS extvmed1.
GGSCI brings up the parameter file in the default text editor. The file will be created under the $GG_HOME/dirprm folder.
GGSCI (zagami.ctn.dot.co.za as ggs_owner@orcl) 350> edit params extvmed1
-- Local extract for VMEDDEV schema
EXTRACT extvmed1
USERID ggs_owner, PASSWORD xxxx
EXTTRAIL /u01/app/oracle/product/12.1.2/oggcore_1/dirdat/l1
TABLE VMEDDEV.*;
You can use two dashes (--) at the beginning of a line in the parameter file for comments. The order of the Extract parameters is important. For example, the EXTRACT parameter must be the first entry in the parameter file, and the EXTTRAIL parameter must precede any associated TABLE statements.
Trail files are staging files used to store the committed transactions. In the
example, the trail file holds the VMEDDEV transactions extracted from the transaction log by the Local Extract. This file will be read by the Data Pump process which will send the transactions over the network to the remote trail files.
The TABLE parameter is used to specify from which source database tables you wish GoldenGate to extract changes. The TABLE parameter is a complex parameter; it has many options that allow you to filter rows, map columns, transform data, and so on. For now, we’ll keep it simple and just instruct GoldenGate to extract all of the table data changes for the VMEDDEV schema. You can do this easily by using a wildcard character, the asterisk (*), after the schema name.
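As a sketch of what the filtering options look like (the WHERE condition below is a hypothetical example, not part of this post's configuration), a TABLE entry can capture a whole schema or restrict which rows are captured:

```
-- Capture every table in the schema using a wildcard
TABLE VMEDDEV.*;

-- Or capture only some rows of one table with a WHERE clause
TABLE VMEDDEV.MEMBERS, WHERE (MEMBER_ID > 1000);
```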
Adding the Extract
Now that we’ve set up the Extract configuration parameters, the next step is to add the Extract group on the Source Database.
You can do that using the following commands from GGSCI:
GGSCI > ADD EXTRACT EXTVMED1, TRANLOG, BEGIN NOW
GGSCI > ADD EXTTRAIL /u01/app/oracle/product/12.1.2/oggcore_1/dirdat/l1, EXTRACT EXTVMED1, MEGABYTES 100
The first command, ADD EXTRACT, adds the Extract using the configuration parameters defined in the last section. After the Extract is added, it establishes checkpoints in the source trail file and on the database transaction log to keep track of processing. The TRANLOG parameter of the ADD EXTRACT command tells GoldenGate to use the database transaction log as its source. In the Oracle example, the
redo logs are the source. BEGIN NOW tells Extract to begin processing changes from the source database as soon as the Extract is started. Optionally, you can also instruct Extract to begin capturing changes at a specific timestamp or using a specific trail-file number.
The ADD EXTTRAIL command adds the local Extract trail file, assigns it to Extract EXTVMED1, and gives it a size of 100MB. The default size of trail files is 10MB. You should size the trail files large enough, based on the transaction volume, that GoldenGate isn't creating new trail files too often and slowing performance.
Starting and Stopping the Extract
After adding the Extract, you need to start it to begin capturing changes, as shown in the following example:
GGSCI (sourceserver) > START EXTRACT EXTVMED1
If needed, you can stop the Extract using a similar method. For example, you may need to make a change to the GoldenGate parameters. In this case, you would stop the Extract, make your changes to the parameters, and then start the Extract for the new changes to take effect. Here is an example of how to stop the EXTVMED1 Extract:
GGSCI (sourceserver) > STOP EXTRACT EXTVMED1
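Putting the parameter-change workflow described above into GGSCI commands, the sequence looks like this:

```
GGSCI (sourceserver) > STOP EXTRACT EXTVMED1
GGSCI (sourceserver) > EDIT PARAMS extvmed1
GGSCI (sourceserver) > START EXTRACT EXTVMED1
```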
Verifying the Extract
When the Extract has started, you can verify that it's running using the INFO EXTRACT command. You should see a status of RUNNING. If you see a status of STOPPED or ABENDED, there may be a problem. In the following example, you're checking the status of the EXTVMED1 Extract:
GGSCI (sourceserver) 2> info extract EXTVMED1
EXTRACT EXTVMED1 Last Started 2014-11-10 19:34 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:00 ago)
Process ID 31578
Log Read Checkpoint Oracle Redo Logs
2014-11-10 19:37:00 Seqno 111885, RBA 14273536
SCN 1.3014610643 (7309577939)
If the Extract isn't running, you can review the GoldenGate error log file and try to resolve the problem. Often, the error is something simple, like a misspelling. The error file is named ggserr.log and is located in the GoldenGate software installation directory.
You can see from the INFO command that the local Extract is RUNNING. Checkpoint lag is the time delay between the last checkpoint written to the trail and the time when the record was processed by GoldenGate. There is currently no checkpoint lag. If you had a high checkpoint lag, it could indicate a performance problem or perhaps the local Extract just catching up on a large volume of changes.
You can also see that Oracle redo logs are the source for the Extract, the last read checkpoint time, the transaction log sequence number, and the relative byte address (RBA) of the record in the transaction log.
You can add the DETAIL option to the INFO command to see even more information about the Extract. You may find the detail display helpful for seeing where the important files for the Extract are located, such as the parameter and report files. The following example uses the INFO command to display details about the EXTVMED1 Local Extract:
GGSCI (sourceserver) 3> info ext EXTVMED1, detail
Finally, you can run the stats command on your Extract. This shows whether the Extract has actually processed any DML changes.
GGSCI (sourceserver) 2> stats ext EXTVMED1
So that's it for step one. In the next post we will start the Data Pump Extract.