Farewell DB Control – Welcome DB Express


One feature many DBAs are going to miss in Oracle 12c is DB Control, which was used as an alternative to Enterprise Manager for managing a local instance.

As a replacement, 12c introduces DB Express, which offers significantly fewer features than DB Control. Unlike DB Control, DB Express runs from inside the database using the XML DB infrastructure, so there are no additional parts to install or executables to start. It has no mid-tier or middleware components, so its overhead on the database server is negligible.

DB Express supports key performance management and basic database administration functions for non-CDBs, CDBs, PDBs, or Oracle RAC database instances.

DB Express Features

There are basically four menu items, namely Configuration, Storage, Security, and Performance.

Configuration:
Initialization parameters (init.ora) management
Memory management
Database Feature Usage
Database Properties

Storage:
Tablespace management
Undo management
Redo management
Archive log management
Control files management

Performance:
Performance Hub, which includes these features:
Real-time performance monitoring and tuning
Historical performance and tuning
SQL monitoring (real-time and historical)
Database operations monitoring
ADDM, including Real-Time ADDM
Active Session History (ASH) Analytics
Automatic and manual SQL Tuning Advisor

Security:
Users
Roles

Starting DB Express

You can use EM Express to manage a non-CDB, a CDB, and all the PDBs in the CDB except for the seed PDB.

For each non-CDB, CDB, or PDB that you want to manage using EM Express on a given host, a unique HTTPS port must be configured. A different port must be configured for each container in a CDB that you want to manage using EM Express.

The HTTPS port is provided by DBCA when it configures your non-CDB, CDB, or PDB.

To find the port number for DB Express in a non-CDB, use the following statement:

SQL> SELECT DBMS_XDB_CONFIG.gethttpsport FROM dual;

To find the port number for a CDB, use the following SQL commands:

alter session set container=CDB$ROOT;
select dbms_xdb_config.gethttpsport() from dual;

To find the port number for a PDB use the following SQL command.

Note that the PDB must be open for the statement to succeed. If the database is only mounted the statement will fail.

alter session set container=PDB1;
select dbms_xdb_config.gethttpsport() from dual;
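If you are unsure whether the PDB is open, a quick check from the root container is (a minimal sketch):

SQL> SELECT name, open_mode FROM v$pdbs;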

If a port number is not returned by any of the above statements, then you must manually configure an HTTPS port using DBMS_XDB_CONFIG.SETHTTPSPORT(number)

SQL> alter session set container=PDB1;
SQL> exec DBMS_XDB_CONFIG.SETHTTPSPORT(5502);

Before you configure the port, ensure that your listener has been started and that a dispatcher has been configured with an XDB service.

Note that you will need the SYSDBA privilege or the XDBADMIN role to execute the above-mentioned queries.
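If you prefer not to connect as SYSDBA, the XDBADMIN role can be granted to a named account (the user name here is only an example):

SQL> GRANT XDBADMIN TO app_dba;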

Once you know your port number, start DB Express using the following URL:

https://<hostname>.<domain>:<port>/em/

Example:

https://prodhost.mydomain:5500/em/

If the database is open and the HOSTNAME and PORT are correct you will reach the login screen.

Troubleshooting

If you do not reach the login page, test to ensure that the port is registered with the listener. The following example checks for port 5500:

lsnrctl status | grep -i 5500

Usually the inability to connect is due to incorrect server name resolution; that is, the listener cannot resolve, or is bound to, the incorrect hostname. The easiest way to check this is to stop the listener via “lsnrctl stop”, temporarily rename the listener.ora file, then start the listener without a listener.ora file in place via “lsnrctl start”. Wait a few minutes, then check the listener status (“lsnrctl status”) to see how it recognizes your Fully Qualified Domain Name (FQDN). If the hostname reference is different than expected, alter your host entry in the listener.ora file.

Another possible cause for failure is the dispatcher parameter not being configured. Oracle Enterprise Manager Database Express is web-based, so it needs a web server. Oracle XML Database (Oracle XML DB) functionality in Oracle Database 12c provides a lightweight built-in web server. To use the Oracle XML DB web server for Oracle Enterprise Manager Database Express, make sure the DISPATCHERS parameter of the database instance includes the entry for the Oracle XML DB service:

SQL> show parameter dispatchers

NAME         TYPE    VALUE
-----------  ------  ------------------
dispatchers  string  (PROTOCOL=TCP)
                     (SERVICE=ANN1XDB)

If the DISPATCHERS parameter is not set, include the following line in the parameter file then restart the database

dispatchers="(PROTOCOL=TCP)(SERVICE=<sid>XDB)"
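If the instance uses an spfile, the same value can usually be set with ALTER SYSTEM instead of editing the parameter file by hand (a sketch, assuming an instance named ORCL):

SQL> ALTER SYSTEM SET dispatchers = '(PROTOCOL=TCP)(SERVICE=ORCLXDB)' SCOPE=BOTH;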

For a PDB, the pluggable database must be open before you can query the port number.
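If the PDB is only mounted, open it first from the root container (assuming the PDB1 example from above):

SQL> ALTER PLUGGABLE DATABASE PDB1 OPEN;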

Log in to DB Express

Log in to the database or PDB with a user account that is authorized to access EM Express. The EM_EXPRESS_BASIC and EM_EXPRESS_ALL roles are created for EM Express, and a user who has been granted at least one of these roles can log in to EM Express.

This user initially could be SYS or SYSTEM, with the password that you specified during database installation.

Although the SYSTEM account can be used to perform day-to-day administrative tasks, Oracle strongly recommends creating a named user account for administering the Oracle database to enable monitoring of database activity.
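As a minimal sketch (the user name and password below are placeholders), such an account could be created and given EM Express access like this:

SQL> CREATE USER em_admin IDENTIFIED BY "ChangeMe#1";
SQL> GRANT CREATE SESSION TO em_admin;
SQL> GRANT EM_EXPRESS_ALL TO em_admin;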

To find out more about EM Express, please see the docs.

To test your knowledge on this subject please visit Mock-examinations.com

Quick and Dirty Count of Distinct Values in a Column


In previous database versions, if we wanted to perform a count of distinct values, we would have used SELECT COUNT (DISTINCT expr) FROM table;

This often resulted in a full table scan which performed poorly.

Starting with Oracle Database 12c Release 1 (12.1.0.2) we can now use an alternative function, APPROX_COUNT_DISTINCT. This returns the approximate number of rows that contain distinct values of expr.

This function provides an alternative to the COUNT (DISTINCT expr) function, which returns the exact number of rows that contain distinct values of expr. APPROX_COUNT_DISTINCT processes large amounts of data significantly faster than COUNT, with negligible deviation from the exact result.
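If you want a feel for the deviation on your own data, a quick side-by-side comparison (a sketch using the same sample sales table as the examples below) is:

SELECT COUNT(DISTINCT cust_id)        AS exact_count,
       APPROX_COUNT_DISTINCT(cust_id) AS approx_count
  FROM sales;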

For expr, you can specify a column of any scalar data type other than BFILE, BLOB, CLOB, LONG, LONG RAW, or NCLOB.

APPROX_COUNT_DISTINCT ignores rows that contain a null value for expr. This function returns a NUMBER.

Examples

The following statement returns the approximate number of rows with distinct values for manager_id:

SELECT APPROX_COUNT_DISTINCT(manager_id) AS "Active Managers"
  FROM employees;

Active Managers
---------------
             18

The following statement returns the approximate number of distinct customers for each product:

SELECT prod_id, APPROX_COUNT_DISTINCT(cust_id) AS "Number of Customers"
  FROM sales
  GROUP BY prod_id
  ORDER BY prod_id;

   PROD_ID Number of Customers
---------- -------------------
        13                2516
        14                2030
        15                2105
        16                2367
        17                2093
        18                2975
        19                2630
        20                3791
. .

To practice your knowledge of this new feature visit mock-examinations.com

Advanced Index Compression


This feature is available starting with Oracle Database 12c Release 1 (12.1.0.2).

Creating an index using advanced index compression reduces the size of all supported unique and non-unique indexes. Advanced index compression improves the compression ratios significantly while still providing efficient access to the indexes. Therefore, advanced index compression works well on all supported indexes, including those indexes that are not good candidates for prefix compression.

For a partitioned index, you can specify the compression type on a partition by partition basis. You can also specify advanced index compression on index partitions even when the parent index is not compressed.
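For example, assuming a local partitioned index named sales_prod_ix with a partition sales_q1_2014 (hypothetical names), a single partition can be rebuilt with advanced compression while the rest of the index remains uncompressed:

ALTER INDEX sh.sales_prod_ix REBUILD PARTITION sales_q1_2014 COMPRESS ADVANCED LOW;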

Advanced index compression works at the block level to provide the best compression for each block. When a CREATE INDEX DDL statement is executed, a block is filled with rows. When it is full, it is compressed with advanced index compression if enough space is saved to insert the next row. When DML statements or other types of DDL statements are executed, and a block becomes full and is about to be split, the block might be recompressed using advanced index compression to avoid the split if enough space is saved to insert the incoming key.

Before enabling advanced index compression, the database must be at a compatibility level of 12.1.0 or higher. You enable advanced index compression using the COMPRESS ADVANCED LOW clause. For example, the following statement enables advanced index compression during the creation of the hr.emp_mndp_ix index:

CREATE INDEX hr.emp_mndp_ix ON hr.employees(manager_id, department_id)
   COMPRESS ADVANCED LOW;

You can also specify the COMPRESS ADVANCED LOW clause during an index rebuild. For example, during rebuild, you can enable advanced index compression for the hr.emp_manager_ix index as follows:

ALTER INDEX hr.emp_manager_ix REBUILD COMPRESS ADVANCED LOW;

The COMPRESSION column in the ALL_INDEXES, ALL_IND_PARTITIONS, and ALL_IND_SUBPARTITIONS views shows whether an index is compressed, and, if it is compressed, the type of compression enabled for the index.
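For example, to check the indexes created above (a quick sketch):

SELECT index_name, compression
  FROM all_indexes
 WHERE owner = 'HR'
   AND index_name IN ('EMP_MNDP_IX', 'EMP_MANAGER_IX');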

Note:

  • Advanced index compression is not supported for bitmap indexes or index-organized tables.
  • Advanced index compression cannot be specified on a single column unique index.

Please visit mock-examinations.com and test yourself on this topic by subscribing to 1z0-060 New Features.

Becoming Oracle Certified


Every Oracle DBA should aspire to become certified and to remain so. By obtaining certification you will not only gain a competitive advantage in the IT job market, but while preparing for certification you will broaden your knowledge by being exposed to a wide variety of essential functions, features, and tasks.

To find out what is required to become certified visit here

While preparing for your exam you should try to get hands-on exposure to the features that you are learning about. Nothing beats experience! Once you feel you are ready, you might find it worthwhile investing in a set of brain dump questions to assess whether you are as prepared as you anticipated. There are a number of sites that offer such brain dumps. My personal website mock-examinations.com is one such site; it will give you the opportunity to complete a test with between 100 – 200 questions absolutely free for one day. If you have spent the recommended time studying upfront, one day should be sufficient to determine if you are ready to sit the real exam.

Some professionals feel that using brain dumps is like cheating. I see the logic if they are your only means of studying, so I reiterate that you should get some hands-on practice before using the brain dumps. This is especially important if you plan to attain Oracle Certified Master status. The OCM exam is practical, and if you rely on brain dumps for your OCP you will be sadly disappointed when you attempt the OCM exam.

I need to upgrade my certification so will be starting a new series of posts related to Oracle 12c features. The posts will be referenced by the 1z0-060 – Upgrade to Oracle 12c exam on mock-examinations.com as I will be compiling questions as I study. You can do the same if you’re up to it: simply join mock-examinations.com, create a topic, and as you study add questions and answers to your topic. You can keep the topic private or choose to make it public. I’ll be making my topic public, so if you want to test your knowledge after reading the posts, simply register at mock-examinations.com and subscribe to topics contributed by SHUNT.

Good luck with your studies

Golden Gate Replication – Step 5 – Manual Monitoring


Murphy’s law dictates that if something can go wrong, it will, so let’s put some measures in place to ensure we are aware of a crisis sooner rather than later.

This post will highlight a few commands that can be used for manually monitoring your Golden Gate processes, a follow up post will show you how to automate the monitoring and receive alerts when something goes awry.

Monitoring All Running Processes
The info all GGSCI command gives us a good overview of the Golden Gate processes’ health. You want to see the Lag at 0, and the Time Since Chkpt should be less than 10 seconds. If you have a long lag in your Extract, the value in the Lag column will be large or will not decrease.

GGSCI source server> info all

Program  Status   Group     Lag at Chkpt  Time Since Chkpt

MANAGER  RUNNING
EXTRACT  RUNNING  DPVMED1   00:00:00      00:00:07
EXTRACT  RUNNING  EXTVMED1  00:00:00      00:00:07

Monitoring the Detail Extract
To understand the current status of the Extract, you can add detail to the info GGSCI command as follows:
GGSCI> Info extract EXTVMED1, detail

EXTRACT EXTVMED1 Last Started 2014-11-10 19:34 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:00 ago)
Process ID 31578
Log Read Checkpoint Oracle Redo Logs
2014-11-12 08:51:01 Seqno 111888, RBA 360514560
SCN 1.3015880562 (7310847858)

Target Extract Trails:

Trail Name                                           Seqno  RBA       Max MB  Trail Type

/u01/app/oracle/product/12.1.2/oggcore_1/dirdat/l1   0      29590434  100     EXTTRAIL

You can see from this output that Golden Gate is reading from redo log sequence number 111888. Golden Gate keeps its own sequence numbers for its trail files; as you can see, the trail file is at sequence 0.

You can get the same information from the following command

GGSCI (source server) 13> Send extract extvmed1 status

Sending STATUS request to EXTRACT EXTVMED1 …

EXTRACT EXTVMED1 (PID 31578)
Current status: Recovery complete: At EOF

Current read position:
Redo thread #: 1
Sequence #: 111888
RBA: 371282432
Timestamp: 2014-11-12 09:10:37.000000
SCN: 1.3015888404
Current write position:
Sequence #: 0
RBA: 29592508
Timestamp: 2014-11-12 09:10:35.051249
Extract Trail: /u01/app/oracle/product/12.1.2/oggcore_1/dirdat/l1

Checking the Current Oracle Online Redo Log

Once we know what sequence number is being read by the extract process we might want to check if it is the current log file being written to by the database. The following query will do this for us.

SELECT A.MEMBER,
       B.SEQUENCE#,
       B.STATUS,
       B.ARCHIVED
  FROM V$LOGFILE A, V$LOG B
 WHERE A.GROUP# = B.GROUP#;

MEMBER               SEQUENCE#  STATUS    ARC
/oralogs/redo01.log     111887  INACTIVE  YES
/oralogs/redo02.log     111888  CURRENT   NO
/oralogs/redo03.log     111886  INACTIVE  YES

We can see now that Golden Gate is processing the current log file so all is good and there is no lag.

If there was lag, then Golden Gate would be processing a log file with a sequence number less than the current log file, and would therefore be reading from an archived log file. To see which archive log file it was reading you could use the following query.

SELECT SEQUENCE#,
       NAME
  FROM V$ARCHIVED_LOG
 WHERE SEQUENCE# = xxx;  -- the sequence being read by Golden Gate

Obviously, if the archive log is no longer available on the server, your Extract process will abend. For this reason, rethink your archive log retention policy to prevent unnecessary restores.
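To confirm whether the sequence the Extract needs is still available on disk, V$ARCHIVED_LOG can be checked directly (a sketch; substitute the sequence number reported by the Extract for xxx):

SELECT SEQUENCE#, NAME, DELETED, STATUS
  FROM V$ARCHIVED_LOG
 WHERE SEQUENCE# = xxx;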

Monitoring Redo Log Consumption Rate

The Golden Gate stats command lists all the tables that have transactions. You can see how many inserts, updates, and deletes have occurred on each table, with a view of the total, daily, and hourly statistics.

GGSCI (source server) 4> stats extvmed1

To get only the statistics for one table, use

GGSCI (source server) 12> stats extvmed1 totalsonly VMEDDEV.TABLENAME

To get an overall summary use the following commands. This is useful to see whether the Extract, Data Pump, and Replicat are in sync.

GGSCI (source server) 13> stats extvmed1 totalsonly VMEDDEV.*
GGSCI (source server) 14> stats dpvmed1 totalsonly VMEDDEV.*
GGSCI (destination server) 1> stats repvmed1 totalsonly VMEDDEV.*

If you see different numbers of operations, then data may have been lost or filtered in the process. For example, in one-way replication, the stats from the Extract and Replicat sides should match. If you have 10 records inserted into the source table but only 9 in the target, there must be something wrong.

Monitoring Lags for Each Group

There are many ways to determine the lag for a process. I like to use the lag command

GGSCI (source server) 17> lag extract extvmed1

Sending GETLAG request to EXTRACT EXTVMED1 …
Last record lag 2 seconds.
At EOF, no more records to process.

The command returns one of three responses:
• At EOF, no more records to process: This means there is no lag. If for some reason the data is not replicating, check the trail file names or the database connection to make sure they’re set up properly.
• Waiting for data to process: In the Extract, the Extract process is waiting for the redo log. In the data pump, Extract is waiting for the Extract trail files. In the Replicat, the Replicat process is waiting for the Data Pump or Extract trail files. The lag may be due to network or disk I/O issues.
• Processing data: The process is busy grinding through the data. It’s normal if this status is temporary and goes away after a few seconds. If not, one of the processes (Extract, Data Pump, or Replicat) is the bottleneck and should be investigated.

Recording the Lag Status in the ggserr.log File

You can record the lag in the ggserr.log file by setting the LAGINFO and LAGREPORT parameters in the Manager parameter file. For example:

LAGINFOSECONDS 30
LAGREPORTMINUTES 1

With these settings the Manager checks the lag every minute and reports lag information to the ggserr.log file whenever the lag reaches 30 seconds or more. You can then check ggserr.log for the lag reports.

Viewing Event and Error Logs
The view report <group_name> command shows the most recent errors and events occurring to a group:

GGSCI source> view report extvmed1
GGSCI source> view report dpvmed1
GGSCI both> view report mgr
GGSCI destination> view report repvmed1

The detailed history is in ggserr.log.

 

Golden Gate Replication – Step 3 – The Initial Load

We have set up an Extract process to capture all changes on the schema while we perform the initial load (see here), and a Data Pump process to ship those changes across to the remote server (see here). Let’s check that they are still running:

GGSCI (source) > info all

Program  Status   Group     Lag at Chkpt  Time Since Chkpt

MANAGER  RUNNING
EXTRACT  RUNNING  DPVMED1   00:00:00      12:51:15
EXTRACT  RUNNING  EXTVMED1  00:00:00      00:00:02

Next we will create and start the initial load extract.

Prerequisites for the GoldenGate Initial Load
Before starting the GoldenGate initial load, ensure the following
• You must disable any foreign-key constraints on the target tables to prevent
errors. You may also choose to disable check constraints to speed up performance.
• Disable any triggers on the target tables. Triggers firing while the data is being
loaded can cause errors.
• You may want to drop indexes on your target tables to speed up performance. This
is optional.

We covered these tasks in Step 1 so we’re good to go.

Create an Initial-Load Extract

Oracle GoldenGate direct-load requires an initial-load Extract and an initial-load Replicat process. If you have a large amount of data you can set up multiple Extracts and Replicats to handle different sets of tables and run them in parallel to speed up the GoldenGate initial load. This takes quite a lot of planning, as you need to group tables according to their relationships.

In this example we will only set up one  initial-load Extract and one initial-load Replicat process.

Starting on the source database we create the initial load extract parameter file

GGSCI (source) >edit params ILVMED1
Extract ILVMED1
-------------------------------------------------------------------
-- Initial Load extract for VMEDDEV schema
-------------------------------------------------------------------
USERID ggs_owner, PASSWORD xxx
RMTHOST 192.18.25.70, MGRPORT 7809
RMTTASK Replicat, Group ILVMED2
Table VMEDDEV.*;

Most of these parameters are the same as the other extracts that we defined. As this will be a direct load there are no trail files created, instead the Extract communicates directly with the initial-load Replicat over the network. Records are loaded by the Replicat directly into the target database as the Extract captures them from the source database.

The RMTTASK parameter is used to have Golden Gate automatically start and stop a Replicat on the target server for the direct load. The Replicat keyword tells Golden Gate that the remote task is a Golden Gate Replicat process. The Group parameter is used to specify the name of the Replicat group on the target server, in this case Replicat is ILVMED2. Although we will configure ILVMED2 on our target database, we will not start it, as mentioned above Golden Gate will automatically start and stop the Replicat for us.

Adding the Initial-Load Extract

The next step is to add the initial-load Extract group.

Notice that adding the initial-load Extract is similar to adding the Local and data-pump Extracts with the exception of the SOURCEISTABLE parameter

GGSCI (source) >ADD EXTRACT ILVMED1, SOURCEISTABLE

The SOURCEISTABLE tells Golden Gate that this Extract is used only for initial loading and to capture all the records from the source database for loading into the target database.

Another difference is that there is no need to specify any trail files for the Oracle direct-load Extract.

Before you start the initial-load Extract, you need to configure and add the initial-load Replicat

Configuring the Initial-Load Replicat

As always we begin by creating the parameter file

GGSCI (target) >edit params ILVMED2

REPLICAT ILVMED2
-------------------------------------------------------------------
-- Initial load replicat for VMEDDEV schema
-------------------------------------------------------------------
USERID ggs_owner, PASSWORD xxx
ASSUMETARGETDEFS
Map VMEDDEV.*, Target VMEDDEV.* ;

The REPLICAT parameter identifies the replicat group.
The USERID is the database username and password to connect to the database.
ASSUMETARGETDEFS tells Golden Gate that the source and target tables are identical. If the database tables being used on the source and target are different, you would need to generate a data-definitions file instead.
The MAP parameter is used to specify the mapping from your source tables to the target tables. The source tables were specified previously in the Extract with the TABLE parameter. In the example, we’re using wildcarding to map all the tables in the VMEDDEV schema on the source database to the same tables in the VMEDDEV schema on the target database.

Adding the Initial-Load Replicat

Before we can use the initial-load Replicat, we need to add the Replicat group.

GGSCI (target) >ADD REPLICAT ILVMED2, SPECIALRUN

The ADD REPLICAT command tells GoldenGate to add the Replicat group. You can use the SPECIALRUN parameter to let GoldenGate know that this is a special Replicat to be used only for the initial load. In the next post we will set up another Replicat for ongoing synchronization.

Now that you have the initial-load Extract and Replicat configured and added to Golden Gate, we can start the load process

Starting the GoldenGate Initial Load

You start the Golden Gate initial-load process by starting the Extract. Remember, it isn’t necessary to start the Replicat, because GoldenGate does this automatically for the initial load when the initial-load Extract is started. Golden Gate only automatically starts the Replicat for an initial-load Replicat, not for the ongoing-change Replicat. When the load is complete, GoldenGate automatically stops the initial load Extract and Replicat.

GGSCI (source) >START EXTRACT ILVMED1

If needed for some reason, you can stop the initial-load Extract using a similar method. For example, you may need to make a change to the Golden Gate parameters on the initial-load Extract. You stop and then start the initial-load Extract for the new changes to take effect. You must remember to check whether any rows have already been loaded to the target table. If so, you need to delete those rows on the target before restarting the extract. Following is an example of stopping the ILVMED1 Extract:

GGSCI (sourceserver) > STOP EXTRACT ILVMED1

Verifying the Initial Load

GGSCI (sourceserver) > info extract ilvmed1

EXTRACT ILVMED1 Last Started 2014-11-10 21:15 Status RUNNING
Checkpoint Lag Not Available
Process ID 12159
Log Read Checkpoint Table VMEDDEV.ACBAUTH
2014-11-10 21:16:38 Record 131668
Task SOURCEISTABLE

You will notice that INFO ALL does not list any Initial Load processes.

You can also use VIEW REPORT ILVMED1 to get more information about the load

You need to wait until the initial load is completely finished before starting the Replicat for ongoing change synchronization. We will see how to do this in the next post.

Golden Gate Replication – Step 2 – The Data Pump Process

In my previous post (see here), we started the Extract process, which writes to a local trail file. Now we will see how to configure, add, and start the Data Pump. This is another Extract process: it reads the records in the source trail written by the Local Extract, pumps or passes them over the network to the target, and creates a target or remote trail.

Configuring the Data Pump
From GGSCI, we create the parameter file for the data pump as shown below

GGSCI (sourceserver) 1> edit params DPVMED1
EXTRACT DPVMED1
-------------------------------------------------------------------
-- Data Pump extract for VMEDDEV schema
-------------------------------------------------------------------
PASSTHRU
RMTHOST 192.18.25.70, MGRPORT 7809
RMTTRAIL /u01/app/gg/product/12.1.2/oggcore_1/dirdat/l2
TABLE VMEDDEV.* ;

EXTRACT
This parameter identifies the process as an Extract and identifies the Extract group. You are limited to eight characters for the Extract group name.

PASSTHRU
You can specify the PASSTHRU parameter on the data pump if you aren’t doing any filtering or column mapping and your source and target data structures are identical.  Using PASSTHRU improves performance by allowing GoldenGate to bypass looking up any table definitions from the database or the data-definitions file.

RMTHOST
Use RMTHOST and MGRPORT to tell the data-pump Extract the remote server name and the port on which the GoldenGate manager is running. This is the destination location where the data pump sends the local trail file over the TCP/IP network. You should verify that you have network connectivity between your source server and the MGRPORT on the RMTHOST. If needed, you can specify multiple remote hosts in
the same parameter file to send trails to different servers.

RMTTRAIL
You can use RMTTRAIL to specify the two-character remote trail name for your data pump. The path references a location on the remote server. The two characters at the end (l2) will be the remote trail name. Remember that l1 is the local trail. The data pump reads the local l1 trail file and pumps it to the remote or target server as l2. Golden Gate adds a six-digit sequence number onto the trail-file name, starting with l2000000, then l2000001, then l2000002, and so on.

TABLE
The TABLE parameter may look familiar by now. This is the same TABLE parameter described for the Local Extract. In the example, you use wildcarding to pump all the tables in the VMEDDEV schema (VMEDDEV.*) to the target server. If needed, you could filter or transform data at this point using the data pump.

Adding the Data Pump
Now that we have set up the data-pump Extract configuration parameters, the next step is to add the datapump Extract group. You can do that using the commands shown in the following example:

GGSCI (sourceserver) > ADD EXTRACT DPVMED1, EXTTRAILSOURCE /u01/app/oracle/product/12.1.2/oggcore_1/dirdat/l1

GGSCI (sourceserver) > ADD RMTTRAIL /u01/app/gg/product/12.1.2/oggcore_1/dirdat/l2, EXTRACT DPVMED1, MEGABYTES 100

The first command, ADD EXTRACT, adds the data-pump Extract using the configuration parameters defined in the parameter file. The EXTTRAILSOURCE parameter tells GoldenGate to use the trail file created by the Local Extract as the source for the data-pump Extract. In the Oracle example, the l1 trail file is the source.

The ADD RMTTRAIL command adds the data-pump Extract remote trail file, assigns it to Extract DPVMED1, and gives it a size of 100MB. The default size of trail files is 10MB. As with the Local Extract, you should size the trail files large enough, based on the transaction volume, that GoldenGate is not creating new trail files too often and slowing performance.

Starting and Stopping the Data Pump
After adding the data-pump Extract, you need to start it to begin processing records from the source trail file, as shown below:

GGSCI (sourceserver) > START EXTRACT DPVMED1

If needed, you can stop the data-pump Extract using the STOP EXTRACT command. For example, you may need to make a change to the data-pump Extract parameters. To do that, you stop the data-pump Extract, change the parameter file, and then start the data-pump Extract for the new changes to take effect. Here is an example of the STOP EXTRACT command:

GGSCI (sourceserver) > STOP EXTRACT DPVMED1

Verifying the Data Pump

When the data-pump Extract has started, you can verify that it’s running using the INFO EXTRACT command. This is similar to verifying the local Extract, as shown in the following example for the DPVMED1 data-pump Extract:

GGSCI (sourceserver) 2> info extract DPVMED1

EXTRACT DPVMED1 Last Started 2014-11-10 20:16 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:10:33 ago)
Process ID 4763
Log Read Checkpoint File /u01/app/oracle/product/12.1.2/oggcore_1/dirdat/l1000000
First Record RBA 0

You should see a status of RUNNING. If you see a status of STOPPED or ABENDED, there may be a problem. As with the Local Extract, you can review the GoldenGate error-log file and try to resolve the problem. The error file is named ggserr.log and is
located in the GoldenGate software location.

You can see from the INFO command that the data pump status is RUNNING. As described earlier, checkpoint lag is the time delay between the last checkpoint written to the trail and the time when the record was processed by GoldenGate. If you had high checkpoint lag, it could indicate a performance problem or the data-pump Extract catching up on a large volume of changes. Usually, the data-pump Extract in PASSTHRU mode doesn’t experience lag because it’s simply transferring data. You can also see the source of your data pump as the l1 trail created by the local extract, the time of the last read checkpoint, and the RBA of the record in the trail file.

You can add the DETAIL option to the INFO command to see even more information about the datapump Extract. You may find the detail display helpful to see where the important files for the extract are located, such as the parameter and report files. This example shows the detail for the DPVMED1 datapump
Extract:
GGSCI (sourceserver) 3> info ext DPVMED1, detail

Finally, you can run the STATS command on your data-pump Extract. This shows whether the datapump Extract has actually processed any DML activity.

GGSCI (sourceserver) 2> stats ext DPVMED1

After you’ve started the Local Extract and the data-pump Extract, you can run the initial data load from your source to the target databases. You can use either GoldenGate itself or the DBMS vendor load utilities to do the initial data load. The DBMS vendor load utilities are optimized to load the vendor database, so you may find that they perform better than the GoldenGate load. On the other hand, if you
need to do complex transformations or are doing heterogeneous replication, the GoldenGate initial load may work better.

My next post will show how to load the data using GoldenGate.

Golden Gate Replication – Step 1 – The Extract Process


This is the first of a series of posts that will show you how to set up one-way replication from one Oracle database to another. It is assumed that Golden Gate is installed and that prerequisites, such as creating the Golden Gate user, have been completed.

The first required step for Golden Gate Basic Replication is setting up and starting the Extract process. Remember that you should start the Extract first to begin capturing changes made while the initial data load is running. If you can afford to take down the source database while the initial load is running, then you can start the Extract after the initial load.

Verifying Database-Level Supplemental Logging
When replicating from Oracle databases, you need to enable database-level supplemental logging on the source database. The supplemental logging is required
to ensure that Oracle adds additional information to the redo logs that is required by GoldenGate.

First, you can verify whether Oracle database supplemental logging is already enabled by running the following SQLPLUS command:

SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

SUPPLEMENTAL_LOG_DATA_MIN
————————-
NO

You can enable Oracle database-level supplemental logging by running the following SQLPLUS command as a user with the ALTER SYSTEM privilege:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Enabling Table-Level Supplemental Logging
GoldenGate requires the key values in addition to the changed data in order for the GoldenGate Replicat to apply the changed data to the target database. Adding supplemental logging to the source database tables ensures that the database logs the key values that GoldenGate needs in order to properly apply the updates on
the target database.
You can use the GoldenGate ADD TRANDATA command to force the database to log primary-key columns for all updates on your source database.

Behind the scenes, the ADD TRANDATA command is equivalent to the following command in an Oracle database:

ALTER TABLE VMEDDEV.MEMBERS
ADD SUPPLEMENTAL LOG GROUP GGS_MEMBERS_19387 (MEMBER_ID) ALWAYS;

It can become tiresome entering individual ADD TRANDATA commands, so when I am replicating an entire schema I like to use an OBEY file and generate the statements dynamically from the database, as follows:

set echo off
set verify off
set pagesize 2000
set linesize 250
set trim on
set heading off
set feedback off
spool VMEDDEV.add_trandata.obey

SQL> select 'add trandata VMEDDEV.' || table_name
  2  from dba_tables where owner = 'VMEDDEV';

spool off

Once the script is generated log into GGSCI on the source database and execute it as follows

GGSCI (zagami.ctn.dot.co.za) 2> dblogin userid ggs_owner, PASSWORD Ggs_owner;
Successfully logged into database.

GGSCI (zagami.ctn.dot.co.za as ggs_owner@orcl) 3> obey diroby/VMEDDEV.add_trandata.obey

It’s good practice, when the ADD TRANDATA commands complete, to verify in SQL*Plus that supplemental logging has been successfully enabled. You can do this using the following query:

SQL> select owner, log_group_name, table_name
from dba_log_groups where owner = 'VMEDDEV';
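You can also confirm which key columns were captured in each supplemental log group (a quick sketch against the same schema):

SQL> select log_group_name, table_name, column_name
from dba_log_group_columns where owner = 'VMEDDEV';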

Disabling Triggers and Constraints
You need to disable any database triggers or cascade-delete referential integrity constraints on your target tables. The reason is to prevent duplicate changes, because GoldenGate is already replicating the results of the trigger and cascade-delete operations from the source database. If you didn’t disable the constraints and triggers, GoldenGate would replicate those changes; then the triggers and cascade delete
constraints would also fire, causing duplicate changes on the target database.

Starting with GoldenGate version 11, a new SUPPRESSTRIGGERS option is available as part of the Replicat DBOPTIONS parameter, to automatically suppress the triggers from firing on the target. You can use it to avoid having to manually disable the triggers. This option is available for Oracle 10.2.0.5 databases and later, and for Oracle 11.2.0.2 databases and later.

You can also use a SQL script such as the following Oracle database example to automatically generate your ALTER TABLE enable and disable commands for the cascade-delete constraints and triggers. Using a script to dynamically generate the commands from the database saves time and ensures more accuracy.

set echo off
set verify off
set pagesize 2000
set linesize 250
set trim on
set heading off
set feedback off
spool VMEDDEV.disable_cascade_delete_constraints.sql
select 'alter table '||owner||'.'||table_name||
' disable constraint '||constraint_name||';'
from dba_constraints
WHERE owner = 'VMEDDEV';

spool off

spool VMEDDEV.disable_triggers.sql
select 'alter trigger '||owner||'.'||trigger_name|| ' disable ;'
from all_triggers
where owner = ‘VMEDDEV’;
spool off
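When replication is decommissioned (or if you need to back the changes out), matching enable scripts can be generated the same way; this is simply a mirror of the statements above, under the same assumptions:

spool VMEDDEV.enable_cascade_delete_constraints.sql
select 'alter table '||owner||'.'||table_name||
' enable constraint '||constraint_name||';'
from dba_constraints
WHERE owner = 'VMEDDEV';
spool off

spool VMEDDEV.enable_triggers.sql
select 'alter trigger '||owner||'.'||trigger_name|| ' enable ;'
from all_triggers
where owner = 'VMEDDEV';
spool off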

Verifying the Manager Status

The Manager process manages all of the GoldenGate processes and resources. Before you can start the Extract and Replicat processes, you need to verify that the GoldenGate Manager process is running on the source and target servers.

For the basic replication configuration, the Manager parameter file on the source and target servers needs to contain the port number, as shown in the following example:

GGSCI (zagami.ctn.dot.co.za as ggs_owner@orcl) 348> edit params mgr

-------------------------------------------------------------------
-- GoldenGate Manager
-------------------------------------------------------------------
port 7809

You can use the INFO MGR command to verify that the Manager is running:

GGSCI (zagami.ctn.dot.co.za as ggs_owner@orcl) 348> info mgr

Manager is running (IP port zagami.ctn.dot.co.za.7809, Process ID 4089).

If the GoldenGate Manager isn’t running, you can start it using the following command:

GGSCI (zagami.ctn.dot.co.za as ggs_owner@orcl) 348>  start manager

Configuring the Local Extract
Now that youfve made sure the Manager is running, let’s configure the Local Extract. In order to do this, you first need to create a parameter file for the Extract. In this example we are configuring the local Extract to capture all the SQL DML changes from the VMEDDEV schema.
To edit in GGSCI, you enter EDIT PARAMS extvmed1.

GGSCI brings up the parameter file in the default text editor. The file will be created under the $GG_HOME/dirprm folder.

GGSCI (zagami.ctn.dot.co.za as ggs_owner@orcl) 350> edit params extvmed1

EXTRACT EXTVMED1
-------------------------------------------------------------------
-- Local extract for VMEDDEV schema
-------------------------------------------------------------------
USERID ggs_owner, PASSWORD xxxx
EXTTRAIL /u01/app/oracle/product/12.1.2/oggcore_1/dirdat/l1
TABLE VMEDDEV.*;

You can use two dashes (--) at the beginning of a line in the parameter file for comments. The order of the Extract parameters listed is important. For example, the EXTRACT parameter must be the first entry in the parameter file, and the EXTTRAIL parameter must precede any associated TABLE statements.

Trail files are staging files used to store the committed transactions. In the
example, the trail file holds the VMEDDEV transactions extracted from the transaction log by the Local Extract. This file will be read by the Data Pump process which will send the transactions over the network to the remote trail files.

The TABLE parameter is used to specify from which source database tables you wish GoldenGate to extract changes. The TABLE parameter is a complex parameter; it has many options that allow you to filter rows, map columns, transform data, and so on. For now, we’ll keep it simple and just instruct GoldenGate to extract all of the table data changes for the VMEDDEV schema. You can do this easily by using a wildcard character, the asterisk (*), after the schema name.

Adding the Extract
Now that we’ve set up the Extract configuration parameters, the next step is to add the Extract group on the Source Database.
You can do that using the following commands from GGSCI:
GGSCI  >  ADD EXTRACT EXTVMED1, TRANLOG, BEGIN NOW
GGSCI  >  ADD EXTTRAIL /u01/app/oracle/product/12.1.2/oggcore_1/dirdat/l1, EXTRACT EXTVMED1, MEGABYTES 100

The first command, ADD EXTRACT, adds the Extract using the configuration parameters defined in the last section. After the Extract is added, it establishes checkpoints in the source trail file and on the database transaction log to keep track of processing. The TRANLOG parameter of the ADD EXTRACT command tells GoldenGate to use the database transaction log as its source. In the Oracle example, the
redo logs are the source. BEGIN NOW tells Extract to begin processing changes from the source database as soon as the Extract is started. Optionally, you can also instruct Extract to begin capturing changes at a specific timestamp or using a specific trail-file number.
The ADD EXTTRAIL command adds the local Extract trail file, assigns it to Extract EXTVMED1, and gives it a size of 100MB. The default size of trail files is 10MB. You should size the trail files large enough, based on the transaction volume, that GoldenGate isn’t creating new trail files too often and slowing performance.

Starting and Stopping the Extract
After adding the Extract, you need to start it to begin capturing changes, as shown in the following example:
GGSCI (sourceserver) > START EXTRACT EXTVMED1

If needed, you can stop the Extract using a similar method. For example, you may need to make a change to the GoldenGate parameters. In this case, you would stop the Extract, make your changes to the parameters, and then start the Extract for the new changes to take effect. Here is an example of how to stop the EXTVMED1 Extract:
GGSCI (sourceserver) > STOP EXTRACT EXTVMED1

Verifying the Extract
When the Extract has started, you can verify that it’s running using the INFO EXTRACT command. You should see a status of RUNNING. If you see a status of STOPPED or ABENDED, there may be a problem. In the following example, you’re checking the status of the EXTVMED1 Extract:

GGSCI (sourceserver) 2> info extract EXTVMED1
EXTRACT EXTVMED1 Last Started 2014-11-10 19:34 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:00 ago)
Process ID 31578
Log Read Checkpoint Oracle Redo Logs
2014-11-10 19:37:00 Seqno 111885, RBA 14273536
SCN 1.3014610643 (7309577939)

If the Extract isn’t running, you can review the GoldenGate error log file and try to resolve the problem. Often, the error is something simple like incorrect spelling. The error file is named ggserr.log and is located in the GoldenGate software installation directory location.
You can see from the INFO command that the local Extract is RUNNING. Checkpoint lag is the time delay between the last checkpoint written to the trail and the time when the record was processed by GoldenGate. There is currently no checkpoint lag. If you had a high checkpoint lag, it could indicate a performance problem or perhaps the local Extract just catching up on a large volume of changes.

You can also see that Oracle redo logs are the source for the Extract, the last read checkpoint time, the transaction log sequence number, and the relative byte address (RBA) of the record in the transaction log.
You can add the DETAIL option to the INFO command to see even more information about the Extract. You may find the detail display helpful to see where the important files for the Extract are located, such as the parameter and report files. The following example uses the INFO command to display details about the EXTVMED1 Local Extract:
GGSCI (sourceserver) 3> info ext EXTVMED1, detail

Finally, you can run the stats command on your Extract. This shows whether the Extract has actually processed any DML changes.

GGSCI (sourceserver) 2> stats ext EXTVMED1

So that’s it for step one; the next post will show how to start the Data Pump Extract, see here.

Summarised Steps for Basic Replication with Golden Gate


This post is to simply list the four steps recommended to implement One Way replication using Golden Gate. These steps assume a zero-downtime replication configuration.  If you can quiesce or stop DML activity against the
source database while the initial load is occurring, it is not necessary to start the Extract before the initial load.

It is assumed that the Golden Gate Installation and pre-requisites have been met and you are planning your first replication.

Each of the steps will be blogged about in detail in follow up posts.

Step 1. Start the Extract.

During this step you need to configure and start the Extract to begin capturing database changes. This extract will capture any DML changes that occur while the initial load is executing, so it needs to start before the initial load.

The Extract captures the database changes from the source database-transaction log and writes them to a source trail file.

Step 2. Start the data pump.

The Data Pump process needs to be configured and started. It will be responsible for reading the source trail file that hosts the changes captured by the extract created in Step 1, and sending the changes across the network to the target server where the changes are written to a remote trail file.

Step 3. Perform the initial load.

Once the Extract and Data Pump processes have started, you then load the data using either GoldenGate or another utility, such as RMAN or Oracle Data Pump.

Step 4. Start the Replicat Process

Configure and start the Replicat process to begin applying changes from the remote trail file.  These changes that were captured while the initial-load process was executing should have already been written to the remote trail file by the Data Pump process, but it may take a while to apply the changes, depending on how busy the database was during the load. When the Replicat has caught up and applied all the changes, the databases are synchronized.

That’s it! My next post will discuss Step 1 in more detail

Green on Golden Gate


I’ve been having some fun getting to know Golden Gate this weekend, and thought I’d share a few of the issues I stumbled across during my first encounter. I’ve had no formal Golden Gate training, so for most of the exercise I followed Gavin Soorma’s excellent tutorial, which can be found here.

I will be using Golden Gate to replicate all tables in the VMEDDEV schema in our OLTP database to the data warehouse. Both databases are Oracle databases running on Linux hosts. The source database version is Oracle 11.1 and the destination is Oracle 11.2.

I opted to go with the latest version of Golden Gate 12c. It was very simple to install using an old friend ./runInstaller

The installation ran without any problems, and the first thing required after install was to execute the command-line tool ggsci and check whether the manager process was running.

I navigated to the Golden Gate Home and tried ./ggsci but I was presented with the following error

./ggsci: error while loading shared libraries: libnnz11.so: wrong ELF class: ELFCLASS32

This was Golden Gate telling me it was picking up a 32-bit libnnz11.so instead of the 64-bit library it needed. To get around this problem I set LD_LIBRARY_PATH to point to $ORACLE_HOME/lib:$ORACLE_HOME/lib32.

My next problem arose when trying to use Golden Gate to perform the initial load of the schema. I had used Oracle Data Pump to create the empty tables on the destination, and created the extract parameter file according to Gavin’s example, but when checking on the status of the extract process it was not moving from STARTING. I viewed the report and noticed many of the following warnings:

WARNING OGG-01223 TCP/IP error 104 (Connection reset by peer), endpoint

This was because the port number that the extract was using was not open for connections. To verify this I used the following check:

nc 192.18.25.70 7821 < /dev/null; echo $?

This will output 0 if port 7821 is open, and 1 if it’s closed.

The command returned 1 so I had to configure DYNAMICPORTLIST in the mgr parameter file and set the port range to 9100-9300.

I restarted the MGR process and EXTRACT process, and the status changed from STARTING to RUNNING, then ABENDED.

The report showed the following error

OGG-01746 Support for parameter SUPPRESSTRIGGERS is not available in the RDBMS version you are using.

After reading numerous articles on the web I managed to get past this error by disabling both triggers and constraints on the destination.

select 'ALTER TRIGGER ' || owner || '.' || object_name ||' disable;'
from dba_objects where object_type='TRIGGER' and owner = 'VMEDDEV';

spool /u01/app/gg/exports/vmed_constraints.sql
select 'alter table vmeddev.' || table_name || ' disable constraint ' || constraint_name ||';'
from dba_constraints where owner = 'VMEDDEV';

I restarted the extract and voilà, it completed successfully.

I had no problems setting up the checkpoint table and getting the replication started and was very impressed with the sub-second lag time so went on to configure the DDL setup.

The configuration was simple enough, but when I tested it the DDL was not replicating. It eventually dawned on me – around about dawn – that I had to restart the manager process; after I did so everything worked as expected and I retired happily to sleep the morning away.