
Oracle Data Guard management

The Oracle Data Guard broker provides monitoring and control for physical and logical standby databases and components. A single command can be used to perform failover. Oracle Enterprise Manager provides a Data Guard Manager GUI for setting up, monitoring, and managing the standby database. SQL*Plus also provides an interface for Data Guard SQL statements and initialization parameters as of Oracle Database 11g.
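
For example, the broker’s DGMGRL command-line interface can report on and fail over a configuration in a handful of commands. The following sketch assumes a broker configuration is already in place and that the standby database is named boston; the names and credentials are placeholders:

    $ dgmgrl
    DGMGRL> CONNECT sys/password
    DGMGRL> SHOW CONFIGURATION;
    DGMGRL> FAILOVER TO boston;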

The Oracle Database 10g Data Guard broker added support for creating and managing configurations containing RAC primary and standby databases. The Data Guard broker now leverages Cluster Ready Services.

Possible Causes of Lost Data with a Physical Standby Database
There is a possibility that you will lose data, even if you use a physical standby database. There are three possible causes of lost data in the event of primary site failure:

• Archived redo logs have not been shipped to the standby site.

• Filled online redo logs have not been archived yet.

• The current online redo log is not a candidate for archiving until a log switch occurs.

These three potential problems are addressed in different ways, as described in the following sections.
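
As a small illustration of the third point, an administrator can force a log switch so that the current online redo log becomes a candidate for archiving; both statements below are standard SQL issued from SQL*Plus:

    ALTER SYSTEM SWITCH LOGFILE;
    -- or, to switch and wait until the log has been archived:
    ALTER SYSTEM ARCHIVE LOG CURRENT;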

Copying archived redo logs to a standby site

Prior to Oracle8i, copying of archived redo logs from the primary to the standby site was not automated. You were free to use any method to copy the files across the network. For example, you could schedule a batch job that copies archived logs to the standby site every N minutes. If the primary site fails, these copies would limit the lost redo information (and therefore the lost data) to a maximum of N minutes of work plus whatever was in the currently active log.

Oracle8i first provided support for the archiving of redo logs to a destination on the primary server as well as on multiple remote servers. This feature automates the copying and application of the archived redo logs to one or more standby sites. The lost data is then limited to the contents of any filled redo logs that have not been completely archived, as well as the current online redo log. Oracle also automatically applies the archived redo logs to the standby database as they arrive.
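
A minimal sketch of this setup uses the LOG_ARCHIVE_DEST_n initialization parameters, pairing a local destination with a remote destination that names an Oracle Net service for the standby database. The archive directory and the service name standby_db below are placeholders for your own environment:

    LOG_ARCHIVE_DEST_1 = 'LOCATION=/u01/oradata/arch'
    LOG_ARCHIVE_DEST_2 = 'SERVICE=standby_db'
    LOG_ARCHIVE_DEST_STATE_2 = ENABLE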

Oracle9i added the option to specify zero data loss to a standby machine. In this mode, all changes to a local log file are written synchronously to a remote log file.

This mode guarantees that switching over to the standby database will not result in any lost data. As you might guess, this mode may impact performance, as each log write must also be completed to a remote log file. Oracle provides an option that will wait to write to a remote log only for a specified period of time, so that a network failure will not bring database processing to a halt.
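
In initialization parameter terms, this combination is expressed with the SYNC and AFFIRM attributes on the remote archive destination, with NET_TIMEOUT bounding how long a log write will wait on the network; the service name and the 30-second timeout here are only illustrative:

    LOG_ARCHIVE_DEST_2 = 'SERVICE=standby_db SYNC AFFIRM NET_TIMEOUT=30'
    -- Once a synchronous destination exists, the protection mode can be raised:
    ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;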

If some data loss is allowable within certain limits, Oracle Database 11g enables a fast-start failover to occur provided that redo loss exposure does not exceed the limits the administrator sets. As noted earlier, a physical standby database is also more useful as of Oracle Database 11g, since it is now possible to query the standby while redo apply is active.
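
With the Data Guard broker, for example, the allowable loss is expressed as a lag limit (in seconds) before fast-start failover is enabled, and the standby can be opened read-only while redo apply continues. The 30-second limit below is only an illustration:

    DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverLagLimit = 30;
    DGMGRL> ENABLE FAST_START FAILOVER;

    -- On the standby, in SQL*Plus, to allow queries while redo apply remains active:
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    ALTER DATABASE OPEN READ ONLY;
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;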

Unarchived redo information and the role of geo-mirroring
If you require that a primary site failure not result in the loss of any committed transactions, and you do not choose to use the zero data loss option of Data Guard, the solution is to mirror all redo log and control file activity from the primary site to the standby site.

You can provide this level of reliability by using a remote mirroring technology sometimes known as geo-mirroring. Essentially, all writes to the online redo log files and the control files at the primary site must be mirrored synchronously to the standby site. For simplicity, you can also geo-mirror the archived log destination, which will duplicate the archived logs at the remote site, in effect copying the archived redo logs from the primary to the standby site. This approach can simplify operations; you use one solution for all the mirroring requirements, as opposed to having Oracle copy the archived logs and having geo-mirroring handle the other critical files.

Geo-mirroring of the online redo logs results in every committed transaction being written to both the online redo log at the primary site and the copy of the online redo log at the standby site. This process adds some time to each transaction for the mirrored write to reach the standby site. Depending on the distance between the sites and the network used, geo-mirroring can hamper performance, so you should test its impact on the normal operation of your database.

Geo-mirroring provides the most complete protection against primary site failure and, accordingly, it is a relatively expensive solution. You will need to weigh the cost of the sophisticated disk subsystems and high-speed telecommunication lines needed for nonintrusive geo-mirroring against the cost of losing the data in any unarchived redo logs and the current online redo log. See Appendix B for where to find more information about geo-mirroring.

Data Redundancy Solutions

Redundant data is another option for dealing with primary site failure. Implementing a redundant data approach differs from using a standby database, which duplicates the entire primary database. Data redundancy is achieved by having a copy of your critical data in an entirely separate Oracle database with a different structure. The data, not the database itself, is redundant. If the primary site fails, users can continue working using the redundant data in the secondary database.

Oracle provides automated synchronous and asynchronous data-replication features to support data redundancy. For simplicity, in the following sections we’ll examine replication using a simple two-site example—a primary and a secondary. Oracle can, however, perform N-way or multimaster replication involving more than two sites, with all sites replicating to all others.

Data Replication—Synchronous and Asynchronous

Whenever you have a data replication scenario, you always have a primary site, from which the replication originates, and a secondary site, which is the recipient of the data replication. (In a multimaster scenario, you can have more than one master site, and a single machine can be a master for one replication plan and a secondary site for another.) When you design your replication plan, you must consider the degree to which data at the secondary site can differ for a period of time from the data at the primary site. This difference is referred to as data divergence. When you implement replication, Oracle generates triggers on all specified tables. These triggers are fired as part of the primary site transactions. The triggers either update the secondary site’s data as part of the same transaction (synchronous replication) or place an entry in a deferred transaction queue that will be used later to update the secondary site (asynchronous replication).
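
As a rough sketch of how this trigger-based (Advanced Replication) setup looks, the DBMS_REPCAT package defines a replication group, generates the supporting triggers, and then resumes replication activity; the group, schema, and table names below are placeholders:

    BEGIN
      -- Create a replication group and add a table to it
      DBMS_REPCAT.CREATE_MASTER_REPGROUP(gname => 'HR_REPGRP');
      DBMS_REPCAT.CREATE_MASTER_REPOBJECT(gname => 'HR_REPGRP', type => 'TABLE',
                                          sname => 'HR', oname => 'EMPLOYEES');
      -- Generate the triggers and packages that capture changes
      DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT(sname => 'HR', oname => 'EMPLOYEES',
                                               type => 'TABLE');
      -- Start (resume) replication for the group
      DBMS_REPCAT.RESUME_MASTER_ACTIVITY(gname => 'HR_REPGRP');
    END;
    /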

Oracle’s replication capabilities are delivered today in the Oracle Streams product. Streams includes log-based replication and Advanced Queues and is covered in more detail in Chapter 13.

Key considerations in setting up a replication environment include the following:
Tolerance for data divergence

The smaller the data divergence, the more individual replication actions will have to be performed. You will reduce the resources needed to implement the replication by increasing the data divergence.

Performance requirements

Since replication requires resources, it can have an impact on performance. However, since Oracle Database 10g, Oracle Streams can capture change data from log files, which greatly reduces the performance impact of replication on an active database.

Network bandwidth

Since replication uses network bandwidth, you have to consider the availability of this resource.

Distance between sites

The greater the distance between sites, the longer the physical transfer of data will take and the longer each replication operation will take.


Site and network stability

If a site or a network goes down, any replication that uses that network or is destined for that site cannot be delivered. When either of these resources comes back online, the backlog of stored replication traffic can have an impact on the amount of time it takes to recover the site.

Experience level of your database administrators

Even the most effective replication plan can be undone by DBAs who aren’t familiar with replication.

Figure 11-11 illustrates synchronous and asynchronous replication.

[Figure 11-11 contains two diagrams, each showing a transaction against TABLE X flowing from a primary database to a secondary database: in the synchronous case the change is applied to the secondary as part of the same transaction, and in the asynchronous case it is first placed in a deferred transaction queue and applied to the secondary later.]

Figure 11-11. Oracle replication for redundant data

Synchronous, or real-time, replication can be used when there is no tolerance for data divergence or lost data. The data at the secondary site must match the primary site at all times and reflect all committed transactions. Each transaction at the primary site will fire triggers that call procedures at the secondary site to reproduce the transaction. Synchronous replication uses distributed transactions that will add overhead to every transaction at the primary site. Whether this additional overhead is acceptable will clearly depend on your specific requirements. Synchronous replication introduces system interdependencies—the secondary site and the network connecting the sites must be up or the primary site will not be able to perform transactions.

You can also use asynchronous, or deferred, replication to provide redundant data. With asynchronous replication, transactions are performed at the primary site and replicated some time later to the secondary site. Until the deferred transaction queue is “pushed” to the secondary site, replicating the changes, the data at the secondary site will differ from the primary site data. If the primary database is irrevocably lost, any unpushed transactions in the deferred queue will also be lost.

The extent of the data divergence and potential data loss resulting from the divergence is a very important consideration in configuring asynchronous replication. In addition, asynchronous replication allows the primary site to function when the network or the secondary site is down, while synchronous replication requires that the secondary site be available. Asynchronous replication adds overhead to transactions at the primary site, so once again, you’ll need to carefully consider throughput requirements and perform appropriate testing. Typically, asynchronous replication adds less overhead than synchronous replication, since the replication of changes can be efficiently batched to the secondary site. However, asynchronous replication will still add some overhead to the operation of the primary site, so you should consider and test the effect of both types of replication on your database environment.
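
With the deferred transaction queue described above, the push to the secondary site is typically scheduled through the DBMS_DEFER_SYS package; the database link name and the 15-minute interval in this sketch are placeholders:

    BEGIN
      -- Push queued transactions to the secondary site every 15 minutes
      DBMS_DEFER_SYS.SCHEDULE_PUSH(destination => 'SECONDARY.WORLD',
                                   interval    => 'SYSDATE + 15/1440',
                                   next_date   => SYSDATE);
    END;
    /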

Old-Fashioned Data Redundancy

You can also achieve data redundancy using Oracle’s standard utilities. Historically, one of the most common backup methods for Oracle was simply to export the contents of the database into a file using the Oracle Export utility. This file could then be shipped in binary form to any platform Oracle supports and subsequently imported into another database with Oracle’s Import utility. This approach can still provide a simple form of data redundancy if the amount of data is manageable.
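
A minimal example of this approach from the command line follows; the credentials and dump file name are placeholders:

    $ exp system/password FULL=Y FILE=prod_full.dmp

After shipping prod_full.dmp to the secondary site:

    $ imp system/password FULL=Y FILE=prod_full.dmp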

Oracle 7.3 introduced a direct path export feature that runs about 70 percent faster than a traditional export. The direct path export avoids some of the overhead of a normal export by directly accessing the data in the Oracle datafiles. Oracle Database 10g and newer database releases provide a much higher speed export/import than the earlier Oracle versions. This latest version, often called the Data Pump, is about 60 percent faster for export and 15 to 20 times faster for import per stream.
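
For comparison, a direct path export and its Data Pump equivalent might look like the following; the credentials, dump file names, and the dpump_dir directory object (which must already exist in the database) are placeholders:

    $ exp system/password FULL=Y DIRECT=Y FILE=prod_full.dmp
    $ expdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=prod_full.dmp
    $ impdp system/password FULL=Y DIRECTORY=dpump_dir DUMPFILE=prod_full.dmp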

Another export option is to unload data from the desired tables into simple flat files by spooling the output of a SELECT statement to an operating system file. You can then ship the flat file to the secondary site and use Oracle’s SQL*Loader utility to load the data into duplicate tables in the secondary database. For cases in which a significant amount of data is input to the primary system using loads, such as in a data warehouse, a viable disaster-recovery plan is simply to back up the load files to a secondary site on which they will wait, ready for reloading to either the primary or secondary site, should a disaster occur.
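
A sketch of the unload-and-reload technique follows; the ORDERS table and its columns are hypothetical, and the SET commands simply suppress headings and page breaks so the spooled file contains only data:

    -- In SQL*Plus on the primary, unload to a comma-delimited flat file
    SET HEADING OFF
    SET FEEDBACK OFF
    SET PAGESIZE 0
    SPOOL orders.dat
    SELECT order_id || ',' || customer_id || ',' || order_total FROM orders;
    SPOOL OFF

    -- SQL*Loader control file (orders.ctl) used at the secondary site,
    -- run with: sqlldr userid=scott/tiger control=orders.ctl
    LOAD DATA
    INFILE 'orders.dat'
    INTO TABLE orders
    FIELDS TERMINATED BY ','
    (order_id, customer_id, order_total)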

While these methods may seem relatively crude, they can provide simple data redundancy for targeted sets of data. Transportable tablespaces can also be used to move entire tablespaces to a backup platform. Transportable tablespaces in Oracle Database 10g and newer releases let you transport tablespaces from one type of system to another, increasing their flexibility for implementing redundancy, moving large amounts of data, and migrating to another database platform.
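
A condensed sketch of transporting a single tablespace with Data Pump follows; the tablespace name, directory object, and datafile path are placeholders, and a cross-platform move may additionally require an RMAN datafile conversion:

    -- On the source database, make the tablespace read-only
    ALTER TABLESPACE users_data READ ONLY;

    $ expdp system/password DIRECTORY=dpump_dir DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=users_data

Copy tts.dmp and the tablespace’s datafiles to the target system, then:

    $ impdp system/password DIRECTORY=dpump_dir DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/u01/oradata/users_data01.dbf'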
