Sunday, December 27, 2015

02. Creating Exception Handler Table

Creating a GoldenGate Exception Handler to Trap and Log Oracle Errors:

GoldenGate does not provide a standard exceptions handler. By default, a Replicat process will abend should any operational failure occur, and will roll back the transaction to the last known checkpoint. This may not be ideal in a production environment.

The HANDLECOLLISIONS and NOHANDLECOLLISIONS parameters can be used to control whether or not Replicat tries to resolve duplicate-record and missing-record errors, but should these errors be ignored?
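For reference, these parameters are typically scoped to individual MAP statements. A minimal sketch, using the same example source and target tables as in step 3 below:

MAP SRC.ORDERS, TARGET TGT.ORDERS, HANDLECOLLISIONS;
MAP SRC.PRODUCTS, TARGET TGT.PRODUCTS, NOHANDLECOLLISIONS;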

The way to determine what error has occurred, by which Replicat, and caused by what data, is to create an Exceptions handler.

Steps
The steps below create an Exceptions handler that will trap and log the specified Oracle error(s), but allow the Replicat to continue to process data:

1. The first step is to create an Exceptions table, similar to the example DDL below:

create table ggs_admin.exceptions
( rep_name varchar2(8)
, table_name varchar2(61)
, errno number
, dberrmsg varchar2(4000)
, optype varchar2(20)
, errtype varchar2(20)
, logrba number
, logposition number
, committimestamp timestamp
);

ALTER TABLE ggs_admin.exceptions ADD (
  CONSTRAINT PK_CTS
 PRIMARY KEY
 (logrba, logposition, committimestamp) USING INDEX PCTFREE 0 TABLESPACE MY_INDEXES);

The Exceptions table must be created in the GoldenGate Admin user schema. It can log exception data for all Replicat processes.

2. Edit each Replicat process parameter file and add the exception handler Macro code block.

[oracle@linuxserver1 ggs]$ ggsci

GGSCI (linuxserver1) 1> edit params RTARGET1

-- This starts the macro
MACRO #exception_handler
BEGIN
, TARGET ggs_admin.exceptions
, COLMAP ( rep_name = "RTARGET1"
, table_name = @GETENV ("GGHEADER", "TABLENAME")
, errno = @GETENV ("LASTERR", "DBERRNUM")
, dberrmsg = @GETENV ("LASTERR", "DBERRMSG")
, optype = @GETENV ("LASTERR", "OPTYPE")
, errtype = @GETENV ("LASTERR", "ERRTYPE")
, logrba = @GETENV ("GGHEADER", "LOGRBA")
, logposition = @GETENV ("GGHEADER", "LOGPOSITION")
, committimestamp = @GETENV ("GGHEADER", "COMMITTIMESTAMP"))
, INSERTALLRECORDS
, EXCEPTIONSONLY;
END;
-- This ends the macro

3. Remaining within the editor (vi), edit the MAP statements to include the #exception_handler(). Also add the REPERROR parameter to reference the Oracle error(s) you wish to trap.

REPERROR (DEFAULT, EXCEPTION)
REPERROR (DEFAULT2, ABEND)
REPERROR (-1, EXCEPTION)
MAP SRC.ORDERS, TARGET TGT.ORDERS;
MAP SRC.ORDERS #exception_handler()
MAP SRC.ORDER_ITEMS, TARGET TGT.ORDER_ITEMS;
MAP SRC.ORDER_ITEMS #exception_handler()
MAP SRC.PRODUCTS, TARGET TGT.PRODUCTS;
MAP SRC.PRODUCTS #exception_handler()

The REPERROR parameter controls how the Replicat process responds to errors when executing the MAP statement.
The DEFAULT argument sets a global response to all errors except those for which explicit REPERROR statements are specified.

E.g. a MAP statement to trap the ORA-01403: "no data found" error.

MAP SRC.ORDERS, TARGET TGT.ORDERS, REPERROR (-1403, EXCEPTION);

The DEFAULT2 argument specifies a "catch all" action for any unanticipated Oracle errors that may occur. In the example in step 3, the Replicat process will abend.

4. Stop and start the Replicat process.

GGSCI (linuxserver1) 3> stop REPLICAT RTARGET1

Sending STOP request to REPLICAT RTARGET1 ...
Request processed.

GGSCI (linuxserver1) 4> start replicat RTARGET1

Sending START request to MANAGER ...
REPLICAT RTARGET1 starting

5. Check that the Replicat process is running.

GGSCI (linuxserver1) 5> info all

Program     Status Group       Lag           Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING RTARGET1    00:00:00      00:00:22

6. Start your application and begin replicating data.

Viewing Exceptions
Below is an example of the data collected following an ORA-00001: "unique constraint violated" error.


SQL> select * from ggs_admin.exceptions where rownum <= 1;

REP_NAME TABLE_NAME ERRNO DBERRMSG
-------- ---------- ----- --------
RTARGET1 SRC.ORDERS     1 OCI Error ORA-00001: unique constraint (TGT.PK_ORD) violated (status = 1), SQL
                         
OPTYPE ERRTYPE LOGRBA LOGPOSITION COMMITTIMESTAMP
------ ------- ------ ----------- -------------------------
INSERT DB         988   171211460 02-APR-10 12.41.42.999468
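To see which Replicat, table, and error number are generating the most exceptions, a summary query can be run against the same table. A minimal sketch, assuming the DDL from step 1:

-- Count trapped errors per Replicat, table and Oracle error number
SELECT rep_name, table_name, errno, COUNT(*) AS error_count
FROM ggs_admin.exceptions
GROUP BY rep_name, table_name, errno
ORDER BY error_count DESC;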



Source: http://www.vitalsofttech.com/troubleshoot-oracle-goldengate-using-exception-handler/



Create the Exceptions table on the target server:

01) Create the table:

 CREATE TABLE GGS_OWNER.GGS_EXCEPTIONS
(
  REP_NAME         VARCHAR2(8 BYTE),
  TABLE_NAME       VARCHAR2(61 BYTE),
  DML_DATE         TIMESTAMP(6),
  ERRNO            NUMBER,
  DBERRMSG         VARCHAR2(4000 BYTE),
  OPTYPE           VARCHAR2(20 BYTE),
  ERRTYPE          VARCHAR2(20 BYTE),
  LOGRBA           NUMBER(20)                   NOT NULL,
  LOGPOSITION      NUMBER(20)                   NOT NULL,
  COMMITTIMESTAMP  TIMESTAMP(6)                 NOT NULL,
  GGS_FILENAME     VARCHAR2(500 BYTE),
  CDRFAIL          NUMBER,
  CDRSUC           NUMBER,
  CDRDETECT        NUMBER
)
TABLESPACE GGS_DATA;


02) Create a unique index:

CREATE UNIQUE INDEX GGS_OWNER.EXCEPTIONS_PK ON GGS_OWNER.GGS_EXCEPTIONS
(LOGRBA, LOGPOSITION, COMMITTIMESTAMP);




03) Add the primary key constraint:

ALTER TABLE GGS_OWNER.GGS_EXCEPTIONS ADD (
  CONSTRAINT EXCEPTIONS_PK
 PRIMARY KEY
 (LOGRBA, LOGPOSITION, COMMITTIMESTAMP)
    USING INDEX GGS_OWNER.EXCEPTIONS_PK);
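Because every trapped error is inserted into this table, it can grow quickly. A hedged housekeeping sketch (the 30-day retention period is only an example):

-- Purge exception rows older than 30 days
DELETE FROM GGS_OWNER.GGS_EXCEPTIONS
WHERE COMMITTIMESTAMP < SYSTIMESTAMP - INTERVAL '30' DAY;
COMMIT;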




04) The Replicat parameter file should be:


REPLICAT rep3
ASSUMETARGETDEFS
SETENV (ORACLE_HOME = "/u01/app/oracle/product/11.2.0/db_1")
SETENV (ORACLE_SID = "RNDT")
USERID ggs_owner, PASSWORD ggs_owner
--EOFDELAY 60
DISCARDFILE /u01/oracle/ggs/dirrpt/discard.txt, APPEND, MEGABYTES 100
REPERROR (DEFAULT, EXCEPTION)
REPERROR (DEFAULT2, ABEND)
REPERROR (-1, EXCEPTION)
MAP SCOTT.*, TARGET SCOTT.*;
MAP ULTIMUSISL.*, TARGET ULTIMUSISL.*;

---new exception handler
MACRO #exception_handler
BEGIN
-- Use the same Golden Gate Exceptions Table for all exception records
, TARGET GGS_OWNER.GGS_EXCEPTIONS
, COLMAP ( rep_name = @GETENV("GGENVIRONMENT", "GROUPNAME")
, TABLE_NAME = @GETENV ("GGHEADER", "TABLENAME")
, ERRNO = @GETENV ("LASTERR", "DBERRNUM")
, DBERRMSG = @GETENV ("LASTERR", "DBERRMSG")
, OPTYPE = @GETENV ("LASTERR", "OPTYPE")
, ERRTYPE = @GETENV ("LASTERR", "ERRTYPE")
, LOGRBA = @GETENV ("GGHEADER", "LOGRBA")
, LOGPOSITION = @GETENV ("GGHEADER", "LOGPOSITION")
, COMMITTIMESTAMP = @GETENV ("GGHEADER", "COMMITTIMESTAMP")
, GGS_FILENAME = @GETENV("GGFILEHEADER", "FILENAME")
, CDRFAIL = @GETENV("DELTASTATS","CDR_RESOLUTIONS_FAILED")
, CDRSUC = @GETENV("DELTASTATS","CDR_RESOLUTIONS_SUCCEEDED")
, CDRDETECT = @GETENV("DELTASTATS","CDR_CONFLICTS"))
, INSERTALLRECORDS
, EXCEPTIONSONLY;
END;
--REPERROR (DEFAULT, EXCEPTION)
--REPERROR (DEFAULT2, ABEND)
--REPERROR (-1, EXCEPTION)
MAP ULTIMUSISL.* #exception_handler();
MAP SCOTT.* #exception_handler();
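Once exceptions start arriving, the extra file-name and CDR columns mapped above can be inspected directly. A minimal sketch against the table created in step 01:

-- Most recent exceptions first, including conflict-detection/resolution counters
SELECT rep_name, table_name, errno, optype, ggs_filename,
       cdrdetect, cdrfail, cdrsuc
FROM GGS_OWNER.GGS_EXCEPTIONS
ORDER BY committimestamp DESC;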





01.OGG : Handling GoldenGate Exceptions and Errors with REPERROR

source


Handling GoldenGate Exceptions and Errors with REPERROR

We can use the REPERROR parameter in the Replicat parameter file to control the way that the replication process responds to or handles any errors encountered in any of the DML statements which it is trying to process.
We can use the keyword DEFAULT to set a global response for all errors except those for which explicit REPERROR statements have been specified.
In the example we will see how we are handling the ORA-00001: unique constraint violated error using an exception handler specified via the REPERROR (-1, EXCEPTION) clause of the Replicat parameter file.
By default, if the replicat process encounters any error condition it will abend.
The example shows how, by using an exception handler, the Replicat process does not abend but handles the exceptions appropriately and continues processing.
If we have a primary key defined on both the source and target tables and a unique key violation does happen, then neither the Extract nor the Replicat process is affected and processing is not halted on either end of the GoldenGate environment.
But consider a case where there are duplicate key values on the source table, while the target table has a primary key or unique constraint in place. When these duplicate rows are propagated to the target server via the Extract trail and the Replicat process encounters the resulting violations of the primary key constraint on the target table, the Replicat process will abend.
However, we can use the REPERROR parameter to specify how we will handle this specific error (or any other error or all errors).
In our example, we have created an ‘exception’ table, and the exception handler writes information about any rows violating the unique constraint to this exception table AND continues processing without abending the Replicat process.
Without the exception handler in place, we will first see that the Replicat process terminates or abends.
SQL> /
Enter value for 1: 4001
Enter value for 2: KERRY
old   3:  (&1,'&2')
new   3:  (4001,'KERRY')
insert into emp
*
ERROR at line 1:
ORA-00001: unique constraint (IDIT_PRD.PK_EMP) violated




GGSCI (indb02) 5> info replicat myrep

REPLICAT   MYREP     Last Started 2011-04-15 12:18   Status ABENDED
Checkpoint Lag       00:00:00 (updated 00:00:17 ago)
Log Read Checkpoint  File ./dirdat/gg000016
                     2011-04-15 12:19:30.219092  RBA 1825

Using the VIEW REPORT command, we can see why the process has abended.
2011-04-15 12:22:27  WARNING OGG-01004  Aborted grouped transaction on 'IDIT_PRD.EMP', Database error 1 (ORA-00001: unique constraint (IDIT_PRD.PK_EMP) violated).

2011-04-15 12:22:27  WARNING OGG-01003  Repositioning to rba 1825 in seqno 16.

2011-04-15 12:22:27  WARNING OGG-01154  SQL error 1 mapping IDIT_PRD.EMP to IDIT_PRD.EMP OCI Error ORA-00001: unique constraint (IDIT_PRD.PK_EMP) violated (status = 1), SQL .
Now let us put an exception handler into our Replicat parameter file.
REPLICAT myrep
SETENV (NLS_LANG="AMERICAN_AMERICA.WE8ISO8859P1")
SETENV (ORACLE_SID=GGDB2)
ASSUMETARGETDEFS
USERID idit_prd,PASSWORD idit_prd
REPERROR (-1, EXCEPTION)
MAP idit_prd.emp, TARGET idit_prd.emp;
INSERTALLRECORDS
MAP idit_prd.emp, TARGET idit_prd.emp_exception,
EXCEPTIONSONLY,
COLMAP (USEDEFAULTS,
optype = @GETENV ("lasterr", "optype"),
dberr = @GETENV ("lasterr", "dberrnum"),
dberrmsg = @GETENV ("lasterr", "dberrmsg"));
So the REPERROR (-1, EXCEPTION) means that when we encounter the ORA-00001 error, the exception handler will kick in.
The EXCEPTIONSONLY clause defines that such exceptions will be logged in the EMP_EXCEPTION table, which has all the columns of the EMP table, plus additional columns which log information about the row violations in the OPTYPE, DBERR and DBERRMSG columns.
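The DDL for that exceptions table is not shown here; a minimal sketch, assuming EMP has just the EMPNO and ENAME columns that appear in the query output further below:

CREATE TABLE idit_prd.emp_exception
( empno    NUMBER
, ename    VARCHAR2(10)
, optype   VARCHAR2(20)
, dberr    NUMBER
, dberrmsg VARCHAR2(4000)
);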
So now we go and insert some duplicate rows on the source table.
SQL> insert into emp
  2   values
  3   (&1,'&2');
Enter value for 1: 1001
Enter value for 2: GREG
old   3:  (&1,'&2')
new   3:  (1001,'GREG')

1 row created.

SQL> commit;

Commit complete.
We see that on the target, this time the Replicat process has not abended and continues processing.
GGSCI (indb02) 26> info replicat myrep

REPLICAT   MYREP     Last Started 2011-04-15 12:26   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:00 ago)
Log Read Checkpoint  File ./dirdat/gg000016
                     2011-04-15 12:26:57.214525  RBA 2216
But the table EMP_EXCEPTION has been populated with information about the duplicate rows which caused the ORA-00001 error.
SQL> select * from emp_exception;


     EMPNO ENAME      OPTYPE               DBERR                DBERRMSG
---------- ---------- -------------------- -------------------- --------------------
      1001 GREG       INSERT               1                    OCI Error ORA-00001:

Have a look at this link which has another good example of error handling and also describes how to create and use a macro in GoldenGate.

Saturday, December 26, 2015

OEM 13C Installation Steps

OGG : Handling Processing Errors


  source


15.1 Overview of Oracle GoldenGate Error Handling

Oracle GoldenGate provides error-handling options for:
  • Extract
  • Replicat
  • TCP/IP

15.2 Handling Extract Errors

There is no specific parameter to handle Extract errors when DML operations are being extracted, but Extract does provide a number of parameters that can be used to prevent anticipated problems. These parameters handle anomalies that can occur during the processing of DML operations, such as what to do when a row to be fetched cannot be located, or what to do when the transaction log is not available. The following is a partial list of these parameters.
  • FETCHOPTIONS
  • WARNLONGTRANS
  • DBOPTIONS
  • TRANLOGOPTIONS
To handle extraction errors that relate to DDL operations, use the DDLERROR parameter.
For a complete parameter list, see Reference for Oracle GoldenGate for Windows and UNIX.

15.3 Handling Replicat Errors during DML Operations

To control the way that Replicat responds to an error during one of its DML statements, use the REPERROR parameter in the Replicat parameter file. You can use REPERROR as a global parameter or as part of a MAP statement. You can handle most errors in a default fashion (for example, to cease processing) with DEFAULT and DEFAULT2 options, and also handle other errors in a specific manner.
The following comprise the range of REPERROR responses:
  • ABEND: roll back the transaction and stop processing.
  • DISCARD: log the error to the discard file and continue processing.
  • EXCEPTION: send the error for exceptions processing. See Section 15.3.1, "Handling Errors as Exceptions" for more information.
  • IGNORE: ignore the error and continue processing.
  • RETRYOP [MAXRETRIES n]: retry the operation, optionally up to a specific number of times.
  • TRANSABORT [, MAXRETRIES n] [, DELAY[C]SECS n]: abort the transaction and reposition to the beginning, optionally up to a specific number of times at specific intervals.
  • RESET: remove all previous REPERROR rules and restore the default of ABEND.
  • TRANSDISCARD: discard the entire replicated source transaction if any operation within that transaction, including the commit, causes a Replicat error that is listed in the error specification. This option is useful when integrity constraint checking is disabled on the target.
  • TRANSEXCEPTION: perform exceptions mapping for every record in the replicated source transaction, according to its exceptions-mapping statement, if any operation within that transaction (including the commit) causes a Replicat error that is listed in the error specification.
Most options operate on the individual record that generated an error, and Replicat processes the other, successful operations in the transaction. The exceptions are TRANSDISCARD and TRANSEXCEPTION: These options affect all records in a transaction if any record in that transaction generates an error. (The ABEND option also applies to the entire transaction, but does not apply error handling.)
See Reference for Oracle GoldenGate for Windows and UNIX for REPERROR syntax and usage.
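A hedged illustration of how several of these responses might be combined in a Replicat parameter file (the error numbers and retry count are only examples):

REPERROR (DEFAULT, EXCEPTION)
REPERROR (DEFAULT2, ABEND)
-- Retry ORA-00054 "resource busy" a few times before failing
REPERROR (-54, RETRYOP MAXRETRIES 3)
-- Send ORA-01403 "no data found" operations to the discard file
REPERROR (-1403, DISCARD)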

15.3.1 Handling Errors as Exceptions

When the action of REPERROR is EXCEPTION or TRANSEXCEPTION, you can map the values of operations that generate errors to an exceptions table and, optionally, map other information about the error that can be used to resolve the error. See Section 15.3.1.3, "About the Exceptions Table".
To map the exceptions to the exceptions table, use either of the following options of the MAP parameter:
  • MAP with EXCEPTIONSONLY
  • MAP with MAPEXCEPTION

15.3.1.1 Using EXCEPTIONSONLY

EXCEPTIONSONLY is valid for one pair of source and target tables that are explicitly named and mapped one-to-one in a MAP statement; that is, there cannot be wildcards. To use EXCEPTIONSONLY, create two MAP statements for each source table that you want to use EXCEPTIONSONLY for on the target:
  • The first, a standard MAP statement, maps the source table to the actual target table.
  • The second, an exceptions MAP statement, maps the source table to the exceptions table (instead of to the target table). An exceptions MAP statement executes immediately after an error on the source table to send the row values to the exceptions table.
    To identify a MAP statement as an exceptions MAP statement, use the INSERTALLRECORDS and EXCEPTIONSONLY options. The exceptions MAP statement must immediately follow the regular MAP statement that contains the same source table. Use a COLMAP clause in the exceptions MAP statement if the source and exceptions-table columns are not identical, or if you want to map additional information to extra columns in the exceptions table, such as information that is captured by means of column-conversion functions or SQLEXEC.
For more information about these parameters, see Reference for Oracle GoldenGate for Windows and UNIX.
Example 15-1 EXCEPTIONSONLY
This example shows how to use REPERROR with EXCEPTIONSONLY and an exceptions MAP statement. This example only shows the parameters that relate to REPERROR; other parameters not related to error handling are also required for Replicat.
REPERROR (DEFAULT, EXCEPTION)
MAP ggs.equip_account, TARGET ggs.equip_account2,
COLMAP (USEDEFAULTS);
MAP ggs.equip_account, TARGET ggs.equip_account_exception,
EXCEPTIONSONLY,
INSERTALLRECORDS
COLMAP (USEDEFAULTS,
DML_DATE = @DATENOW (),
OPTYPE = @GETENV ('LASTERR', 'OPTYPE'),
DBERRNUM = @GETENV ('LASTERR', 'DBERRNUM'),
DBERRMSG = @GETENV ('LASTERR', 'DBERRMSG'));
In this example, the REPERROR parameter is set for DEFAULT error handling, and the EXCEPTION option causes the Replicat process to treat failed operations as exceptions and continue processing.
There are two MAP statements:
  • A regular MAP statement that maps the source table ggs.equip_account to its target table equip_account2.
  • An exceptions MAP statement that maps the same source table to the exceptions table ggs.equip_account_exception.
In this case, four extra columns were created, in addition to the same columns that the table itself contains:
DML_DATE
OPTYPE
DBERRNUM
DBERRMSG
To populate the DML_DATE column, the @DATENOW column-conversion function is used to get the date and time of the failed operation, and the result is mapped to the column. To populate the other extra columns, the @GETENV function is used to return the operation type, database error number, and database error message.
The EXCEPTIONSONLY option of the exceptions MAP statement causes the statement to execute only after a failed operation on the source table. It prevents every operation from being logged to the exceptions table.
The INSERTALLRECORDS parameter causes all failed operations for the specified source table, no matter what the operation type, to be logged to the exceptions table as inserts.
Note:
There can be no primary key or unique index restrictions on the exception table. Uniqueness violations are possible in this scenario and would generate errors.

15.3.1.2 Using MAPEXCEPTION

MAPEXCEPTION is valid when the names of the source and target tables in the MAP statement are wildcarded. Place the MAPEXCEPTION clause in the regular MAP statement, the same one where you map the source tables to the target tables. Replicat maps all operations that generate errors from all of the wildcarded tables to the same exceptions table; therefore, the exceptions table should contain a superset of all of the columns in all of the wildcarded tables.
Because you cannot individually map columns in a wildcard configuration, use the COLMAP clause with the USEDEFAULTS option to handle the column mapping for the wildcarded tables (or use the COLMATCH parameter if appropriate), and use explicit column mappings to map any additional information, such as that captured with column-conversion functions or SQLEXEC.
When using MAPEXCEPTION, include the INSERTALLRECORDS parameter in the MAPEXCEPTION clause. INSERTALLRECORDS causes all operation types to be applied to the exceptions table as INSERT operations. This is required to keep an accurate record of the exceptions and to prevent integrity errors on the exceptions table.
For more information about these parameters, see Reference for Oracle GoldenGate for Windows and UNIX.
Example 15-2 MAPEXCEPTION
This is an example of how to use MAPEXCEPTION for exceptions mapping. The MAP and TARGET clauses contain wildcarded source and target table names. Exceptions that occur when processing any table with a name beginning with TRX are captured to the fin.trxexceptions table using the designated mapping.
MAP src.trx*, TARGET trg.*,
MAPEXCEPTION (TARGET fin.trxexceptions,
INSERTALLRECORDS,
COLMAP (USEDEFAULTS,
ACCT_NO = ACCT_NO,
OPTYPE = @GETENV ('LASTERR', 'OPTYPE'),
DBERR = @GETENV ('LASTERR', 'DBERRNUM'),
DBERRMSG = @GETENV ('LASTERR', 'DBERRMSG')
)
);

15.3.1.3 About the Exceptions Table

Use an exceptions table to capture information about an error that can be used for such purposes as troubleshooting your applications or configuring them to handle the error. At minimum, an exceptions table should contain enough columns to receive the entire row image from the failed operation. You can define extra columns to contain other information that is captured by means of column-conversion functions, SQLEXEC, or other external means.
To ensure that the trail record contains values for all of the columns that you map to the exceptions table, you can use either the LOGALLSUPCOLS parameter or the following parameters in the Extract parameter file:
  • Use the NOCOMPRESSDELETES parameter so that all columns of a row are written to the trail for DELETE operations.
  • Use the GETUPDATEBEFORES parameter so that Extract captures the before image of a row and writes it to the trail.
For more information about these parameters, see Reference for Oracle GoldenGate for Windows and UNIX.
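A minimal sketch of how these parameters might appear in an Extract parameter file (the group, schema, and trail names are hypothetical):

EXTRACT exta
USERID ggs_admin, PASSWORD ggs_admin
-- Write supplementally logged columns, before images, and full delete images to the trail
LOGALLSUPCOLS
GETUPDATEBEFORES
NOCOMPRESSDELETES
EXTTRAIL ./dirdat/ea
TABLE src.*;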

15.4 Handling Replicat errors during DDL Operations

To control the way that Replicat responds to an error that occurs for a DDL operation on the target, use the DDLERROR parameter in the Replicat parameter file. For more information, see Reference for Oracle GoldenGate for Windows and UNIX.

15.5 Handling TCP/IP Errors

To provide instructions for responding to TCP/IP errors, use the TCPERRS file. This file is in the Oracle GoldenGate directory.
Example 15-3 TCPERRS File
# TCP/IP error handling parameters
# Default error response is abend
#
# Error          Response   Delay(csecs)  Max Retries

ECONNABORTED     RETRY      1000          10
ECONNREFUSED     RETRY      1000          12
ECONNRESET       RETRY      500           10
ENETDOWN         RETRY      3000          50
ENETRESET        RETRY      1000          10
ENOBUFS          RETRY      100           60
ENOTCONN         RETRY      100           10
EPIPE            RETRY      500           10
ESHUTDOWN        RETRY      1000          10
ETIMEDOUT        RETRY      1000          10
NODYNPORTS       RETRY      100           10
The TCPERRS file contains default responses to basic errors. To alter the instructions or add instructions for new errors, open the file in a text editor and change any of the values in the columns shown in Table 15-1:
Table 15-1 TCPERRS Columns
  • Error: Specifies a TCP/IP error for which you are defining a response.
  • Response: Controls whether or not Oracle GoldenGate tries to connect again after the defined error. Valid values are either RETRY or ABEND.
  • Delay: Controls how long Oracle GoldenGate waits before attempting to connect again.
  • Max Retries: Controls the number of times that Oracle GoldenGate attempts to connect again before aborting.
If a response is not explicitly defined in the TCPERRS file, Oracle GoldenGate responds to TCP/IP errors by abending.

15.6 Maintaining Updated Error Messages

The error, information, and warning messages that Oracle GoldenGate processes generate are stored in a data file named ggmessage.dat in the Oracle GoldenGate installation directory. The version of this file is checked upon process startup and must be identical to that of the process in order for the process to operate.

15.7 Resolving Oracle GoldenGate Errors

To get help with specific troubleshooting issues, go to My Oracle Support at http://support.oracle.com and search the Knowledge Base.

Thursday, December 24, 2015

OGG : Oracle LOGDUMP USAGE !

https://docs.oracle.com/goldengate/1212/gg-winux/GLOGD/wu_logdump.htm#GLOGD115
http://www.ateam-oracle.com/oracle-goldengate-logdump/

Oracle GoldenGate: Logdump

Introduction
Logdump is a standard utility that comes with the Oracle GoldenGate (OGG) application. Logdump enables the user to view and scan for transactions and records, obtain transaction statistics, and save data contained within OGG Trails. In this article we shall present examples of the Oracle GoldenGate Log File Dump Utility version 12.1.2 functionality that aid in troubleshooting data issues, determining database recovery points, and OGG group load balancing.
Main Article
To fully comprehend the data available via Logdump, one must first know something about the OGG Trail and its record layout. The following diagram provides a simplified view of an OGG Trail.
[Diagram: simplified layout of an OGG Trail file]
The File Header contains a wealth of information about the physical environment that produced the Trail file and Trail file contents. This information includes:
  • Trail Info: Signature, Compatibility, Character Set, Creation Time, URI, URI History, File Name, Multi-part, Trail Sequence Number, File Size, First CSN, Last CSN, First IO Time, Last IO Time, Log BSN
  • Machine Info: Operating System Name, Node Name, Release, Version, Hardware
  • Database Info: Vendor, Name, Character Set, Major Version, Minor Version, Version String, Client Character Set, Client Version String, Client NCharacter Set, Db Locale, Db NCharacter Set, Db Object Metadata, Db Time Zone
  • Producer Info: Name, Data Source, Major Version, Minor Version, Maintenance Level, Bug Fix Level, Build Number, Version String
  • Continuity Info: Recovery Mode
Each source operation captured is encapsulated into a unique transaction record. This record consists of a GoldenGate Header (GHDR), variable length data area, and GGS and User Tokens.

GHDR

The GoldenGate Header, or GHDR, area details information about the source operation and transaction. This includes the source commit timestamp (IO Time token), record length, database operation type (IO Type), whether this record is a before or after data image, and source table name. The GHDR also contains information that denotes each record as the beginning record of a transaction, the ending record of a transaction, a record between the beginning and end of a transaction, or a single record transaction.

Data Area

The variable length data area contains the atomic data for the source database operation. This includes the column index number and data changed for an after image, or original column data for a before image.

GGS Tokens

GoldenGate Tokens written for each transaction record include the source database row id, source database change number (LOGCSN), and source database transaction id (TRANID).

User Tokens

User Tokens are data defined by the user and added to the Trail via the Extract @TOKEN column conversion function.
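A hedged sketch of how such a token might be defined on the Extract side (the token names and table are purely illustrative):

TABLE east.orders,
TOKENS (TK_HOST  = @GETENV ('GGENVIRONMENT', 'HOSTNAME'),
        TK_GROUP = @GETENV ('GGENVIRONMENT', 'GROUPNAME'));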
Now that we know the basic layout for an OGG Trail, let's see what information we can view via Logdump.

 Getting Started With Logdump

Logdump is run from a terminal window. When the application starts, a banner is displayed showing the version and copyright information, followed by the Logdump prompt. The command open [trail] will create a shared read file handle for the designated trail file.
[lpenton@oelora114 OGG_East]$ ./logdump

Oracle GoldenGate Log File Dump Utility for Oracle
Version 12.1.2.0.0 17185003 OGGCORE_12.1.2.0.0_PLATFORMS_130924.1316

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.

Logdump 883 >open ./dirdat/pe000038
Current LogTrail is /home/lpenton/OGG/OGG_East/dirdat/pe000038
You may notice in the above example the number “883” in the Logdump prompt. Logdump maintains a history buffer of all commands issued. There are 883 items in my Logdump command buffer which may be viewed or re-executed via the history, fc, and ! commands. To learn more about the usage of these commands, refer to my blog article Oracle GoldenGate: Using the GGSCI !, History, and FC Commands. The article is written using GGSCI, but the command function and use is the same for Logdump.

Getting Record Counts

The command count will return data for all records contained in the trail. This data includes total data bytes, average data bytes per record, totals for each type of database operation, the number of before and after image records, average number of transactions in the Trail, average data bytes per transaction, and average number of records per transaction.
Logdump 884 >count
LogTrail /home/lpenton/OGG/OGG_East/dirdat/pe000038 has 227838 records 
Total Data Bytes          22672115 
  Avg Bytes/Record              99 
Insert                       76626 
FieldComp                   145759 
GGSPKUpdate                   5437 
RestartOK                        1 
DDL                             14 
Others                           1 
Before Images                75598 
After Images                152239 

Average of 146209 Transactions 
    Bytes/Trans .....        229 
    Records/Trans ...          1 
    Files/Trans .....          0
This is great as an overview, but it does not provide any level of real detail about the Extract’s workload. The command detail on tells Logdump to display the totals and averages shown above and then sort the data and display it for each unique table contained within the Trail.
Logdump 885 >detail on
Logdump 886 >count
LogTrail /home/lpenton/OGG/OGG_East/dirdat/pe000038 has 227838 records 
Total Data Bytes          22672115 
  Avg Bytes/Record              99 
Insert                       76626 
FieldComp                   145759 
GGSPKUpdate                   5437 
RestartOK                        1 
DDL                             14 
Others                           1 
Before Images                75598 
After Images                152239 

Average of 146209 Transactions 
    Bytes/Trans .....        229 
    Records/Trans ...          1 
    Files/Trans .....          0 

                                                   Partition 0 
Total Data Bytes             16256 
  Avg Bytes/Record            1083 
RestartOK                        1 
DDL                             14 
After Images                    15 

*FileHeader*                                       Partition 0 
Total Data Bytes              1455 
  Avg Bytes/Record            1455 
Others                           1 

EAST.CATEGORIES                                    Partition 4 
Total Data Bytes              3023 
  Avg Bytes/Record             143 
Insert                          21 
After Images                    21 

EAST.CATEGORIES_DESCRIPTION                        Partition 4 
Total Data Bytes               930 
  Avg Bytes/Record              44 
Insert                          21 
After Images                    21 

EAST.CUSTOMERS                                     Partition 4 
Total Data Bytes             63999 
  Avg Bytes/Record             144 
Insert                         443 
After Images                   443 

EAST.CUSTOMERS_INFO                                Partition 4 
Total Data Bytes            640059 
  Avg Bytes/Record              66 
Insert                         443 
FieldComp                     9126 
Before Images                 4563 
After Images                  5006 

EAST.CUSTOMERS_LKUP                                Partition 4 
Total Data Bytes              3326 
  Avg Bytes/Record              92 
Insert                          36 
After Images                    36 

EAST.NEXT_CUST                                     Partition 4 
Total Data Bytes             19242 
  Avg Bytes/Record              21 
Insert                           1 
FieldComp                      437 
GGSPKUpdate                    437 
Before Images                  437 
After Images                   438 

EAST.NEXT_ORDER                                    Partition 4 
Total Data Bytes            220014 
  Avg Bytes/Record              21 
Insert                           1 
FieldComp                     5000 
GGSPKUpdate                   5000 
Before Images                 5000 
After Images                  5001 

EAST.ORDERS                                        Partition 4 
Total Data Bytes           3480300 
  Avg Bytes/Record             695 
Insert                        5001 
After Images                  5001 

EAST.ORDERS_PRODUCTS                               Partition 4 
Total Data Bytes           9274325 
  Avg Bytes/Record             141 
Insert                       65570 
After Images                 65570 

EAST.ORDERS_STATUS_HISTORY                         Partition 4 
Total Data Bytes            660239 
  Avg Bytes/Record             131 
Insert                        5002 
After Images                  5002 

EAST.ORDERS_TOTAL                                  Partition 4 
Total Data Bytes               343 
  Avg Bytes/Record             114 
Insert                           3 
After Images                     3 

EAST.PRODUCTS                                      Partition 4 
Total Data Bytes           8270694 
  Avg Bytes/Record              63 
Insert                          28 
FieldComp                   131196 
Before Images                65598 
After Images                 65626 

EAST.PRODUCTS_DESCRIPTION                          Partition 4 
Total Data Bytes             17126 
  Avg Bytes/Record             611 
Insert                          28 
After Images                    28 

EAST.PRODUCTS_TO_CATEGORIES                        Partition 4 
Total Data Bytes               784 
  Avg Bytes/Record              28 
Insert                          28 
After Images                    28
If you are new to OGG you may be confused by the database operation types FieldComp and GGSPkUpdate.
FieldComp identifies records where a Compressed Update operation was written to the source database transaction log. Because most databases only write the primary key or table index data, column index, and change data to the transaction log, the majority of Update operations will be identified as FieldComp in the Trail.
GGSPkUpdate identifies records identified as a Primary Key Update in the database transaction log.

View File Header

The command fileheader detail will set Logdump to display each file header token when the header record is encountered.
To view the file header record, I want to first ensure Logdump is positioned at the beginning of the Trail. The command pos 0 tells Logdump to position to relative byte address, or RBA, 0 within the file. RBA 0 is the beginning of the file.
We then execute the command fileheader detail to set the display option and then the command n, which is shorthand for next, to display the next Trail record.
Logdump 893 >pos 0
Reading forward from RBA 0 
Logdump 894 >fileheader detail
Logdump 895 >n

2014/03/17 08:30:56.729.864 FileHeader           Len  1455 RBA 0 
Name: *FileHeader* 
 3000 0341 3000 0008 4747 0d0a 544c 0a0d 3100 0002 | 0..A0...GG..TL..1...  
 0004 3200 0004 2000 0000 3300 0008 02f2 1b00 0c18 | ..2... ...3.........  
 b308 3400 0032 0030 7572 693a 6f65 6c6f 7261 3131 | ..4..2.0uri:oelora11  
 343a 3a68 6f6d 653a 6c70 656e 746f 6e3a 4f47 473a | 4::home:lpenton:OGG:  
 4f47 475f 4561 7374 3a50 5f49 4541 5354 3500 0036 | OGG_East:P_IEAST5..6  
 3500 0032 0030 7572 693a 6f65 6c6f 7261 3131 343a | 5..2.0uri:oelora114:  
 3a68 6f6d 653a 6c70 656e 746f 6e3a 4f47 473a 4f47 | :home:lpenton:OGG:OG  

GroupID x30 '0' TrailInfo        Info x00  Length  833 
 3000 0341 3000 0008 4747 0d0a 544c 0a0d 3100 0002 | 0..A0...GG..TL..1...  
 0004 3200 0004 2000 0000 3300 0008 02f2 1b00 0c18 | ..2... ...3.........  
 b308 3400 0032 0030 7572 693a 6f65 6c6f 7261 3131 | ..4..2.0uri:oelora11  
 343a 3a68 6f6d 653a 6c70 656e 746f 6e3a 4f47 473a | 4::home:lpenton:OGG:  
 4f47 475f 4561 7374 3a50 5f49 4541 5354 3500 0036 | OGG_East:P_IEAST5..6  
 3500 0032 0030 7572 693a 6f65 6c6f 7261 3131 343a | 5..2.0uri:oelora114:  
 3a68 6f6d 653a 6c70 656e 746f 6e3a 4f47 473a 4f47 | :home:lpenton:OGG:OG  
 475f 4561 7374 3a49 455f 4541 5354 3600 0013 0011 | G_East:IE_EAST6.....  
 2e2f 6469 7264 6174 2f70 6530 3030 3033 3837 0000 | ./dirdat/pe0000387..  
 0101 3800 0004 0000 0026 3900 0008 0000 0000 02d8 | ..8......&9.........  
 cd38 3a00 0081 0735 3837 3535 3337 0000 0000 0000 | .8:....5875537......  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 003b 0000 8107 | ...............;....  
 3631 3434 3230 3000 0000 0000 0000 0000 0000 0000 | 6144200.............  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 3c00 0008 02f2 1b00 126e 40c0 | ........<........n@.  
 3d00 0008 02f2 1b00 21cf 0918 3eff 0081 0000 0000 | =.......!...>.......  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 003f ff00 0800 0000 0000 0000 0040 ff00 | .....?...........@..  
 8100 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 41ff 0008 0000 0000 0000 | ..........A.........  
 0000 4200 0033 0000 2c2e 2e2e 2e2e 2e2e 2e2e 2e2e | ..B..3..,...........  
 2e2e 2e2e 2e2e 2e2e 2e2e 2e2e 2e2e 2e2e 2e2e 2e2e | ....................  
 2e2e 2e2e 2e2e 2e2e 2e2e 2e2e 2e                  | .............  
TokenID x30 '0' Signature        Info x00  Length    8 
 4747 0d0a 544c 0a0d                               | GG..TL..  
TokenID x31 '1' Compatibility    Info x00  Length    2 
 0004                                              | ..  
TokenID x32 '2' Charset          Info x00  Length    4 
 2000 0000                                         |  ...  
TokenID x33 '3' CreationTime     Info x00  Length    8 
 02f2 1b00 0c18 b308                               | ........  
TokenID x34 '4' URI              Info x00  Length   50 
 0030 7572 693a 6f65 6c6f 7261 3131 343a 3a68 6f6d | .0uri:oelora114::hom  
 653a 6c70 656e 746f 6e3a 4f47 473a 4f47 475f 4561 | e:lpenton:OGG:OGG_Ea  
 7374 3a50 5f49 4541 5354                          | st:P_IEAST  
TokenID x35 '5' URIHistory       Info x00  Length   54 
 3500 0032 0030 7572 693a 6f65 6c6f 7261 3131 343a | 5..2.0uri:oelora114:  
 3a68 6f6d 653a 6c70 656e 746f 6e3a 4f47 473a 4f47 | :home:lpenton:OGG:OG  
 475f 4561 7374 3a49 455f 4541 5354                | G_East:IE_EAST  
TokenID x36 '6' Filename         Info x00  Length   19 
 0011 2e2f 6469 7264 6174 2f70 6530 3030 3033 38   | .../dirdat/pe000038  
TokenID x37 '7' MultiPart        Info x00  Length    1 
 01                                                | .  
TokenID x38 '8' Seqno            Info x00  Length    4 
 0000 0026                                         | ...&  
TokenID x39 '9' FileSize         Info x00  Length    8 
 0000 0000 02d8 cd38                               | .......8  
TokenID x3a ':' FirstCSN         Info x00  Length  129 
 0735 3837 3535 3337 0000 0000 0000 0000 0000 0000 | .5875537............  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 00                            | .........  
TokenID x3b ';' LastCSN          Info x00  Length  129 
 0736 3134 3432 3030 0000 0000 0000 0000 0000 0000 | .6144200............  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 00                            | .........  
TokenID x3c '<' FirstIOTime      Info x00  Length    8 
 02f2 1b00 126e 40c0                               | .....n@.  
TokenID x3d '=' LastIOTime       Info x00  Length    8 
 02f2 1b00 21cf 0918                               | ....!...  
TokenID x3e '>' LOGBSN           Info xff  Length  129 
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 00                            | .........  
TokenID x3f '?' BITFLAGS         Info xff  Length    8 
 0000 0000 0000 0000                               | ........  
TokenID x40 '@' UNUSEDSEQUENCE   Info xff  Length  129 
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 00                            | .........  
TokenID x41 'A' UNUSEDINT64      Info xff  Length    8 
 0000 0000 0000 0000                               | ........  
TokenID x42 'B' TrailEncryption  Info x00  Length   51 
 0000 2c2e 2e2e 2e2e 2e2e 2e2e 2e2e 2e2e 2e2e 2e2e | ..,.................  
 2e2e 2e2e 2e2e 2e2e 2e2e 2e2e 2e2e 2e2e 2e2e 2e2e | ....................  
 2e2e 2e2e 2e2e 2e00 0000 00                       | ...........  

GroupID x31 '1' MachineInfo      Info x00  Length  107 
 3100 006b 3000 0007 0005 4c69 6e75 7831 0000 0b00 | 1..k0.....Linux1....  
 096f 656c 6f72 6131 3134 3200 0018 0016 322e 362e | .oelora1142.....2.6.  
 3332 2d34 3030 2e32 392e 322e 656c 3575 656b 3300 | 32-400.29.2.el5uek3.  
 0025 0023 2331 2053 4d50 2057 6564 204a 756c 2031 | .%.##1 SMP Wed Jul 1  
 3720 3130 3a35 383a 3336 2050 4454 2032 3031 3334 | 7 10:58:36 PDT 20134  
 0000 0800 0678 38                                 | .....x8  
TokenID x30 '0' Sysname          Info x00  Length    7 
 0005 4c69 6e75 78                                 | ..Linux  
TokenID x31 '1' Nodename         Info x00  Length   11 
 0009 6f65 6c6f 7261 3131 34                       | ..oelora114  
TokenID x32 '2' Release          Info x00  Length   24 
 0016 322e 362e 3332 2d34 3030 2e32 392e 322e 656c | ..2.6.32-400.29.2.el  
 3575 656b                                         | 5uek  
TokenID x33 '3' Version          Info x00  Length   37 
 0023 2331 2053 4d50 2057 6564 204a 756c 2031 3720 | .##1 SMP Wed Jul 17   
 3130 3a35 383a 3336 2050 4454 2032 3031 33        | 10:58:36 PDT 2013  
TokenID x34 '4' Hardware         Info x00  Length    8 
 0006 7838 365f 3634                               | ..x86_64  

GroupID x32 '2' DatabaseInfo     Info x00  Length  360 
 3200 0168 3000 0002 0007 3100 0006 0004 4f52 434c | 2..h0.....1.....ORCL  
 3200 0006 0004 6f72 636c 3300 0004 0000 27e0 3400 | 2.....orcl3.....'.4.  
 0002 000b 3500 0002 0002 3600 00e7 00e5 4f72 6163 | ....5.....6.....Orac  
 6c65 2044 6174 6162 6173 6520 3131 6720 456e 7465 | le Database 11g Ente  
 7270 7269 7365 2045 6469 7469 6f6e 2052 656c 6561 | rprise Edition Relea  
 7365 2031 312e 322e 302e 342e 3020 2d20 3634 6269 | se 11.2.0.4.0 - 64bi  
 7420 5072 6f64 7563 7469 6f6e 0a50 4c2f 5351 4c20 | t Production.PL/SQL   
 5265 6c65 6173 6520 3131 2e32 2e30 2e34 2e30 202d | Release 11.2.0.4.0 -  
 2050 726f 6475 6374 696f 6e0a 434f 5245 0931 312e |  Production.CORE.11.  
 322e 302e 342e 3009 5072 6f64 7563 7469 6f6e 0a54 | 2.0.4.0.Production.T  
 4e53 2066 6f72 204c 696e 7578 3a20 5665 7273 696f | NS for Linux: Versio  
 6e20 3131 2e32 2e30 2e34 2e30 202d 2050 726f 6475 | n 11.2.0.4.0 - Produ  
 6374 696f 6e0a 4e4c 5352 544c 2056 6572 7369 6f6e | ction.NLSRTL Version  
 2031 312e 322e 302e 342e 3020 2d20 5072 6f64 7563 |  11.2.0.4.0 - Produc  
 7469 6f6e 0a37 0000 0440 0000 0038 0000 0c00 0a31 | tion.7...@...8.....1  
 312e 322e 302e 342e 3039 0000 0400 0000 013a 0000 | 1.2.0.4.09.......:..  
 0200 003b 0000 0400 0000 013c 0000 1400 0000 1014 | ...;.......<........  
 1414 1414 1414 1414 1414 1411 1414 143d 0000 0500 | ...............=....  
TokenID x30 '0' Vendor           Info x00  Length    2 
 0007                                              | ..  
TokenID x31 '1' Name             Info x00  Length    6 
 0004 4f52 434c                                    | ..ORCL  
TokenID x32 '2' Instance         Info x00  Length    6 
 0004 6f72 636c                                    | ..orcl  
TokenID x33 '3' Charset          Info x00  Length    4 
 0000 27e0                                         | ..'.  
TokenID x34 '4' MajorVersion     Info x00  Length    2 
 000b                                              | ..  
TokenID x35 '5' MinorVersion     Info x00  Length    2 
 0002                                              | ..  
TokenID x36 '6' VerString        Info x00  Length  231 
 00e5 4f72 6163 6c65 2044 6174 6162 6173 6520 3131 | ..Oracle Database 11  
 6720 456e 7465 7270 7269 7365 2045 6469 7469 6f6e | g Enterprise Edition  
 2052 656c 6561 7365 2031 312e 322e 302e 342e 3020 |  Release 11.2.0.4.0   
 2d20 3634 6269 7420 5072 6f64 7563 7469 6f6e 0a50 | - 64bit Production.P  
 4c2f 5351 4c20 5265 6c65 6173 6520 3131 2e32 2e30 | L/SQL Release 11.2.0  
 2e34 2e30 202d 2050 726f 6475 6374 696f 6e0a 434f | .4.0 - Production.CO  
 5245 0931 312e 322e 302e 342e 3009 5072 6f64 7563 | RE.11.2.0.4.0.Produc  
 7469 6f6e 0a54 4e53 2066 6f72 204c 696e 7578 3a20 | tion.TNS for Linux:   
 5665 7273 696f 6e20 3131 2e32 2e30 2e34 2e30 202d | Version 11.2.0.4.0 -  
 2050 726f 6475 6374 696f 6e0a 4e4c 5352 544c 2056 |  Production.NLSRTL V  
 6572 7369 6f6e 2031 312e 322e 302e 342e 3020 2d20 | ersion 11.2.0.4.0 -   
 5072 6f64 7563 7469 6f6e 0a                       | Production.  
TokenID x37 '7' ClientCharset    Info x00  Length    4 
 4000 0000                                         | @...  
TokenID x38 '8' ClientVerString  Info x00  Length   12 
 000a 3131 2e32 2e30 2e34 2e30                     | ..11.2.0.4.0  
TokenID x39 '9' ClientNCharset   Info x00  Length    4 
 0000 0001                                         | ....  
TokenID x3a ':' DbLocale         Info x00  Length    2 
 0000                                              | ..  
TokenID x3b ';' DbNCharset       Info x00  Length    4 
 0000 0001                                         | ....  
TokenID x3c '<' DbObjNameMetadata Info x00  Length   20 
 0000 0010 1414 1414 1414 1414 1414 1414 1114 1414 | ....................  
TokenID x3d '=' DbTimeZone       Info x00  Length    5 
 0003 474d 54                                      | ..GMT  

GroupID x33 '3' ProducerInfo     Info x00  Length  127 
 3300 007f 3000 0009 0007 4945 5f45 4153 5431 0000 | 3...0.....IE_EAST1..  
 0200 0332 0000 0200 0c33 0000 0200 0134 0000 0200 | ...2.....3.....4....  
 0235 0000 0200 0036 0000 0200 0037 0000 4a00 4856 | .5.....6.....7..J.HV  
 6572 7369 6f6e 2031 322e 312e 322e 302e 3020 3137 | ersion 12.1.2.0.0 17  
 3138 3530 3033 204f 4747 434f 5245 5f31 322e 312e | 185003 OGGCORE_12.1.  
 322e 302e 305f 504c 4154 464f 524d 535f 3133 3039 | 2.0.0_PLATFORMS_1309  
 3234 2e31 3331 36                                 | 24.1316  
TokenID x30 '0' Name             Info x00  Length    9 
 0007 4945 5f45 4153 54                            | ..IE_EAST  
TokenID x31 '1' DataSource       Info x00  Length    2 
 0003                                              | ..  
TokenID x32 '2' MajorVersion     Info x00  Length    2 
 000c                                              | ..  
TokenID x33 '3' MinorVersion     Info x00  Length    2 
 0001                                              | ..  
TokenID x34 '4' MaintLevel       Info x00  Length    2 
 0002                                              | ..  
TokenID x35 '5' BugFixLevel      Info x00  Length    2 
 0000                                              | ..  
TokenID x36 '6' BuildNumber      Info x00  Length    2 
 0000                                              | ..  
TokenID x37 '7' VerString        Info x00  Length   74 
 0048 5665 7273 696f 6e20 3132 2e31 2e32 2e30 2e30 | .HVersion 12.1.2.0.0  
 2031 3731 3835 3030 3320 4f47 4743 4f52 455f 3132 |  17185003 OGGCORE_12  
 2e31 2e32 2e30 2e30 5f50 4c41 5446 4f52 4d53 5f31 | .1.2.0.0_PLATFORMS_1  
 3330 3932 342e 3133 3136 5f46 424f                | 30924.1316_FBO  

GroupID x34 '4' ContinunityInfo  Info x00  Length    8 
 3400 0008 3000 0004                               | 4...0...  
TokenID x30 '0' RecoveryMode     Info x00  Length    4 
 0000 0001                                         | ....
In the file header record above, you can see a lot of information about my test environment and database. Take particular note of these two tokens:
TokenID x3a ':' FirstCSN         Info x00  Length  129 
 0735 3837 3535 3337 0000 0000 0000 0000 0000 0000 | .5875537............  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 00                            | .........  
TokenID x3b ';' LastCSN          Info x00  Length  129 
 0736 3134 3432 3030 0000 0000 0000 0000 0000 0000 | .6144200............  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 | ....................  
 0000 0000 0000 0000 00                            | .........
The tokens FirstCSN and LastCSN identify the beginning and ending CSN (SCN for Oracle Databases) for transaction records in this trail. This may come in handy should you ever need to perform a database recovery or start an OGG Group that reads the Trail at a specific CSN.
The LastCSN token is only populated when a Trail has been closed by its Extract. Trails are closed only when an Extract opens and begins writing to a new trail. If an Extract is in the STOPPED or ABEND state, or you are viewing the most current Trail, this token will contain no data.
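For example, a hedged GGSCI sketch of starting a Replicat from a CSN read out of the file header (the group name and CSN value are only illustrative):

GGSCI> START REPLICAT rep1, ATCSN 6144200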

 View GHDR

The command ghdr on will tell Logdump to display the GoldenGate Header for each record in the Trail. The command sfh, which is shorthand for scanforheader, will read through the trail and display the first record with a GoldenGate Header.
Logdump 904 >ghdr on
Logdump 905 >sfh
___________________________________________________________________ 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)  
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)  
RecLength  :   141  (x008d)   IO Time    : 2014/03/25 12:08:43.003.270   
IOType     :     5  (x05)     OrigNode   :   255  (xff) 
TransInd   :     .  (x00)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :         73       AuditPos   : 141483536 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2014/03/25 12:08:43.003.270 Insert               Len   141 RBA 2777 
Name: EAST.CATEGORIES 
After  Image:                                             Partition 4   G  b   
 0000 000a 0000 0000 0000 0000 0001 0001 0019 0000 | ....................  
 0015 6361 7465 676f 7279 5f68 6172 6477 6172 652e | ..category_hardware.  
 6769 6600 0200 0a00 0000 0000 0000 0000 0000 0300 | gif.................  
 0a00 0000 0000 0000 0000 0100 0400 1f00 0032 3031 | .................201  
 312d 3034 2d30 343a 3038 3a33 313a 3030 2e30 3030 | 1-04-04:08:31:00.000  
 3030 3030 3030 0005 001f ffff 3139 3030 2d30 312d | 000000......1900-01-  
 3031 3a30 303a 3030 3a30 302e 3030 3030 3030 3030 | 01:00:00:00.00000000  
Column     0 (x0000), Len    10 (x000a)  
Column     1 (x0001), Len    25 (x0019)  
Column     2 (x0002), Len    10 (x000a)  
Column     3 (x0003), Len    10 (x000a)  
Column     4 (x0004), Len    31 (x001f)  
Column     5 (x0005), Len    31 (x001f)
In the record above, the GHDR is the data below the solid line and above the after image line. The GHDR identifies that this record is an After Image (BeforeAfter), is 141 bytes in length (RecLength), was committed in the source database on 2014/03/25 at 12:08:43.003270 (IO Time), is the start of a multi-record transaction (TransInd 0), and is an insert operation (IOType) to the table EAST.CATEGORIES (Name).
The remaining data is the hexadecimal representation of the inserted record and table column index information. From this view of the data we cannot determine much about this transaction record, so let's display two additional options.

View Data Details and GGS Tokens

Let's look at the record shown above again, but this time we will tell Logdump to display the GGS Tokens and additional details about the transaction record.
First, we need to reposition Logdump’s read pointer to a location prior to the record. In the record details shown above, we see that the starting point for this record is RBA 2777. The command pos 2776 will place the read pointer at byte offset 2,776 in the Trail file. The command detail data will tell Logdump to display details about the data in this record, and the command ggstoken detail will cause Logdump to display data from the GGS Token area. Then we use sfh to scan for the record header and display the complete record.
Logdump 911 >pos 2776
Reading forward from RBA 2776 
Logdump 912 >detail data
Logdump 913 >ggstoken detail
Logdump 914 >sfh
___________________________________________________________________ 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)  
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)  
RecLength  :   141  (x008d)   IO Time    : 2014/03/25 12:08:43.003.270   
IOType     :     5  (x05)     OrigNode   :   255  (xff) 
TransInd   :     .  (x00)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :         73       AuditPos   : 141483536 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2014/03/25 12:08:43.003.270 Insert               Len   141 RBA 2777 
Name: EAST.CATEGORIES 
After  Image:                                             Partition 4   G  b   
 0000 000a 0000 0000 0000 0000 0001 0001 0019 0000 | ....................  
 0015 6361 7465 676f 7279 5f68 6172 6477 6172 652e | ..category_hardware.  
 6769 6600 0200 0a00 0000 0000 0000 0000 0000 0300 | gif.................  
 0a00 0000 0000 0000 0000 0100 0400 1f00 0032 3031 | .................201  
 312d 3034 2d30 343a 3038 3a33 313a 3030 2e30 3030 | 1-04-04:08:31:00.000  
 3030 3030 3030 0005 001f ffff 3139 3030 2d30 312d | 000000......1900-01-  
 3031 3a30 303a 3030 3a30 302e 3030 3030 3030 3030 | 01:00:00:00.00000000  
Column     0 (x0000), Len    10 (x000a)  
 0000 0000 0000 0000 0001                          | ..........  
Column     1 (x0001), Len    25 (x0019)  
 0000 0015 6361 7465 676f 7279 5f68 6172 6477 6172 | ....category_hardwar  
 652e 6769 66                                      | e.gif  
Column     2 (x0002), Len    10 (x000a)  
 0000 0000 0000 0000 0000                          | ..........  
Column     3 (x0003), Len    10 (x000a)  
 0000 0000 0000 0000 0001                          | ..........  
Column     4 (x0004), Len    31 (x001f)  
 0000 3230 3131 2d30 342d 3034 3a30 383a 3331 3a30 | ..2011-04-04:08:31:0  
 302e 3030 3030 3030 3030 30                       | 0.000000000  
Column     5 (x0005), Len    31 (x001f)  
 ffff 3139 3030 2d30 312d 3031 3a30 303a 3030 3a30 | ..1900-01-01:00:00:0  
 302e 3030 3030 3030 3030 30                       | 0.000000000  

GGS tokens: 
TokenID x52 'R' ORAROWID         Info x00  Length   20 
 4141 4157 476f 4141 4541 4141 4149 4c41 4141 0001 | AAAWGoAAEAAAAILAAA..  
TokenID x4c 'L' LOGCSN           Info x00  Length    7 
 3631 3838 3238 38                                 | 6188288  
TokenID x36 '6' TRANID           Info x00  Length   10 
 3137 2e31 382e 3234 3437                          | 17.18.2447
Once again the GHDR and data record are displayed, this time along with the data for each column, the Oracle ROWID of the source row (ORAROWID), the source database CSN for the transaction (LOGCSN), and the source database transaction ID (TRANID).
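These tokens are useful when tracing a trail record back to the source database. For example, the ORAROWID value can be used to fetch the source row directly; the statement below is simply an illustration using the ROWID shown in the token output above, and assumes the row still exists and has not been moved:

select * from east.categories where rowid = 'AAAWGoAAEAAAAILAAA';

The TRANID token (17.18.2447 here) is the Oracle transaction ID in undo-segment.slot.sequence format, which can likewise be matched against transaction-level views on the source database.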

Positioning and Filtering Data

So far we have seen that we can read Trail data by positioning to a byte offset within the Trail and either scanning for the GoldenGate header or selecting the next record. Let's take this a step further and look at some of Logdump's more advanced functionality.
Earlier we displayed a count of the data contained in the trail by source table. Let's use Logdump to locate the last record for the EAST.ORDERS table and then save all of the records with the same CSN to a new Trail file.
First we set a data filter for the table with the command filter inc filename EAST.ORDERS, which tells Logdump to ignore everything except records for the EAST.ORDERS table. Next, pos eof positions Logdump's read pointer at the end of the Trail, and pos rev tells Logdump to read the Trail backwards. The command n then has Logdump scan backwards through the trail and display the first EAST.ORDERS record it encounters, which is the last such record in the file.
Logdump 933 >filter inc filename EAST.ORDERS
Logdump 934 >pos eof
Reading forward from RBA 14469316 
Logdump 935 >pos rev
Reading in reverse from RBA 14469316 
Logdump 936 >n
___________________________________________________________________ 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)  
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)  
RecLength  :   705  (x02c1)   IO Time    : 2014/04/07 10:06:16.000.000   
IOType     :     5  (x05)     OrigNode   :   255  (xff) 
TransInd   :     .  (x03)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :        101       AuditPos   : 223086608 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2014/04/07 10:06:16.000.000 Insert               Len   705 RBA 63547 
Name: EAST.ORDERS 
After  Image:                                             Partition 4   G  s   
 0000 000a 0000 0000 0000 0000 0001 0001 000a 0000 | ....................  
 0000 0000 0000 0001 0002 0010 0000 000c 4c6f 7265 | ................Lore  
 6e20 5065 6e74 6f6e 0003 0004 ffff 0000 0004 0014 | n Penton............  
 0000 0010 3338 3230 2042 7572 6775 6e64 7920 5374 | ....3820 Burgundy St  
 0005 0004 ffff 0000 0006 000f 0000 000b 4e65 7720 | ................New   
 4f72 6c65 616e 7300 0700 0900 0000 0537 3031 3137 | Orleans........70117  
 0008 000d 0000 0009 4c6f 7569 7369 616e 6100 0900 | ........Louisiana...  

GGS tokens: 
TokenID x52 'R' ORAROWID         Info x00  Length   20 
 4141 4157 534c 4141 4541 4141 414a 3141 4141 0001 | AAAWSLAAEAAAAJ1AAA..  
TokenID x4c 'L' LOGCSN           Info x00  Length    7 
 3831 3633 3430 34                                 | 8163404  
TokenID x36 '6' TRANID           Info x00  Length    9 
 312e 3239 2e39 3835 30                            | 1.29.9850  

Filtering suppressed     12 records
The GGS token LOGCSN shows that the Oracle commit SCN for this transaction is 8163404.
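As an aside, this CSN is a normal Oracle SCN, so it can be correlated with activity on the source database. If the SCN is still within the database's SCN-to-time mapping retention, you could translate it to an approximate commit time; this is an optional check run on the source database, not part of the Logdump session:

select scn_to_timestamp(8163404) from dual;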
Now let's find all of the records in the trail associated with that CSN. First, tell Logdump to scan forward again with pos for and reposition to the beginning of the trail with pos 0. Then use filter reset all to clear the existing data filter and set a new filter for all records carrying the desired LOGCSN with filter logcsn 8163404. The count command will then report how many records in this trail have that CSN.
Logdump 937 >pos for
Reading forward from RBA 63547 
Logdump 938 >pos 0
Reading forward from RBA 0 
Logdump 939 >filter reset all
Logdump 940 >filter logcsn 8163404
Logdump 941 >count
LogTrail /home/lpenton/OGG/OGG_East/dirdat/pe000038 has 2 records 
Total Data Bytes              1870 
  Avg Bytes/Record             935 
Insert                           1 
DDL                              1 
After Images                     2 
Filtering matched            2 records 
          suppressed       255 records 

Average of 2 Transactions 
    Bytes/Trans .....        983 
    Records/Trans ...          1 
    Files/Trans .....          1
The count command shows there are two records in the trail associated with this CSN. Executing the n command twice will display them both.
Logdump 942 >n
___________________________________________________________________ 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)  
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)  
RecLength  :   705  (x02c1)   IO Time    : 2014/04/07 10:06:16.000.000   
IOType     :     5  (x05)     OrigNode   :   255  (xff) 
TransInd   :     .  (x03)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :        101       AuditPos   : 223086608 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2014/04/07 10:06:16.000.000 Insert               Len   705 RBA 63547 
Name: EAST.ORDERS 
After  Image:                                             Partition 4   G  s   
 0000 000a 0000 0000 0000 0000 0001 0001 000a 0000 | ....................  
 0000 0000 0000 0001 0002 0010 0000 000c 4c6f 7265 | ................Lore  
 6e20 5065 6e74 6f6e 0003 0004 ffff 0000 0004 0014 | n Penton............  
 0000 0010 3338 3230 2042 7572 6775 6e64 7920 5374 | ....3820 Burgundy St  
 0005 0004 ffff 0000 0006 000f 0000 000b 4e65 7720 | ................New   
 4f72 6c65 616e 7300 0700 0900 0000 0537 3031 3137 | Orleans........70117  
 0008 000d 0000 0009 4c6f 7569 7369 616e 6100 0900 | ........Louisiana...  

GGS tokens: 
TokenID x52 'R' ORAROWID         Info x00  Length   20 
 4141 4157 534c 4141 4541 4141 414a 3141 4141 0001 | AAAWSLAAEAAAAJ1AAA..  
TokenID x4c 'L' LOGCSN           Info x00  Length    7 
 3831 3633 3430 34                                 | 8163404  
TokenID x36 '6' TRANID           Info x00  Length    9 
 312e 3239 2e39 3835 30                            | 1.29.9850    

Filtering suppressed    206 records 
Logdump 943 >n
___________________________________________________________________ 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x00)  
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)  
RecLength  :  1165  (x048d)   IO Time    : 2014/04/07 10:06:16.000.000   
IOType     :   160  (xa0)     OrigNode   :     0  (x00) 
TransInd   :     .  (x03)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :          0       AuditPos   : 0 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2014/04/07 10:06:16.000.000 DDLOP                Len  1165 RBA 64366 
Name:  
After  Image:                                             Partition 0   G  s   
 2c43 353d 2730 272c 2c42 373d 2730 272c 2c42 323d | ,C5='0',,B7='0',,B2=  
 2738 3836 3337 272c 2c43 3231 3d27 272c 2c42 333d | '88637',,C21='',,B3=  
 2745 4153 5427 2c2c 4234 3d27 4f52 4445 5253 5f50 | 'EAST',,B4='ORDERS_P  
 524f 4455 4354 5327 2c2c 4331 323d 2745 4153 5427 | RODUCTS',,C12='EAST'  
 2c2c 4331 333d 274f 5244 4552 535f 5052 4f44 5543 | ,,C13='ORDERS_PRODUC  
 5453 272c 2c42 353d 2754 4142 4c45 272c 2c42 363d | TS',,B5='TABLE',,B6=  
 2754 5255 4e43 4154 4527 2c2c 4238 3d27 272c 2c42 | 'TRUNCATE',,B8='',,B  

GGS tokens: 
TokenID x44 'D' DDL              Info x00  Length   24 
 4541 5354 004f 5244 4552 535f 5052 4f44 5543 5453 | EAST.ORDERS_PRODUCTS  
 0000 004e                                         | ...N  
TokenID x4c 'L' LOGCSN           Info x00  Length    7 
 3831 3633 3430 34                                 | 8163404  
TokenID x36 '6' TRANID           Info x00  Length   10 
 3130 2e32 362e 3937 3634                          | 10.26.9764
The Logdump output shows that 206 records were suppressed before the first match because they were not associated with the desired CSN. Now we can use the save command to create a new Trail file containing just these records.

Save Records To New Trail File

To save the records to a new Trail file, first let's make sure we are at the initial record associated with the CSN. As shown previously, position Logdump at the beginning of the Trail file with the command pos 0 and then scan to the first matching record with the n command. This displays the Insert record for EAST.ORDERS at RBA 63547 in the Trail. To make sure this record is included in the new Trail, we need to position the Logdump read pointer at the record prior to it.
Set Logdump to read the Trail in reverse with pos rev, and then scan for the header of the previous record with sfh. This displays the previous record, a DDL operation against the EAST.ORDERS table at RBA 62311 in the Trail.
Logdump 944 >pos 0
Reading forward from RBA 0 
Logdump 945 >n
___________________________________________________________________ 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)  
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)  
RecLength  :   705  (x02c1)   IO Time    : 2014/04/07 10:06:16.000.000   
IOType     :     5  (x05)     OrigNode   :   255  (xff) 
TransInd   :     .  (x03)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :        101       AuditPos   : 223086608 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2014/04/07 10:06:16.000.000 Insert               Len   705 RBA 63547 
Name: EAST.ORDERS 
After  Image:                                             Partition 4   G  s   
 0000 000a 0000 0000 0000 0000 0001 0001 000a 0000 | ....................  
 0000 0000 0000 0001 0002 0010 0000 000c 4c6f 7265 | ................Lore  
 6e20 5065 6e74 6f6e 0003 0004 ffff 0000 0004 0014 | n Penton............  
 0000 0010 3338 3230 2042 7572 6775 6e64 7920 5374 | ....3820 Burgundy St  
 0005 0004 ffff 0000 0006 000f 0000 000b 4e65 7720 | ................New   
 4f72 6c65 616e 7300 0700 0900 0000 0537 3031 3137 | Orleans........70117  
 0008 000d 0000 0009 4c6f 7569 7369 616e 6100 0900 | ........Louisiana...  

GGS tokens: 
TokenID x52 'R' ORAROWID         Info x00  Length   20 
 4141 4157 534c 4141 4541 4141 414a 3141 4141 0001 | AAAWSLAAEAAAAJ1AAA..  
TokenID x4c 'L' LOGCSN           Info x00  Length    7 
 3831 3633 3430 34                                 | 8163404  
TokenID x36 '6' TRANID           Info x00  Length    9 
 312e 3239 2e39 3835 30                            | 1.29.9850  

Filtering suppressed    206 records 
Logdump 946 >pos rev
Reading in reverse from RBA 63547 
Logdump 947 >sfh
___________________________________________________________________ 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x00)  
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)  
RecLength  :  1138  (x0472)   IO Time    : 2014/04/07 10:06:16.000.000   
IOType     :   160  (xa0)     OrigNode   :     0  (x00) 
TransInd   :     .  (x03)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :          0       AuditPos   : 0 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2014/04/07 10:06:16.000.000 DDLOP                Len  1138 RBA 62311 
Name:  
After  Image:                                             Partition 0   G  s   
 2c43 353d 2730 272c 2c42 373d 2730 272c 2c42 323d | ,C5='0',,B7='0',,B2=  
 2738 3836 3335 272c 2c43 3231 3d27 272c 2c42 333d | '88635',,C21='',,B3=  
 2745 4153 5427 2c2c 4234 3d27 4f52 4445 5253 272c | 'EAST',,B4='ORDERS',  
 2c43 3132 3d27 4541 5354 272c 2c43 3133 3d27 4f52 | ,C12='EAST',,C13='OR  
 4445 5253 272c 2c42 353d 2754 4142 4c45 272c 2c42 | DERS',,B5='TABLE',,B  
 363d 2754 5255 4e43 4154 4527 2c2c 4238 3d27 272c | 6='TRUNCATE',,B8='',  
 2c42 393d 2745 4153 5427 2c2c 4337 3d27 272c 2c43 | ,B9='EAST',,C7='',,C  

GGS tokens: 
TokenID x44 'D' DDL              Info x00  Length   15 
 4541 5354 004f 5244 4552 5300 0000 4e             | EAST.ORDERS...N  
TokenID x4c 'L' LOGCSN           Info x00  Length    7 
 3831 3633 3336 37                                 | 8163367  
TokenID x36 '6' TRANID           Info x00  Length    9 
 352e 392e 3130 3031 30                            | 5.9.10010  

Logdump 948 >pos for 
Reading forward from RBA 62311 
Logdump 949 >save ./dirdat/zz999999 2 records
Saved 2 records to /home/lpenton/OGG/OGG_East/dirdat/zz999999
From this point in the Trail file, Logdump is set to read forward again with the pos for command. The command save ./dirdat/zz999999 2 records then writes the records to a new trail named zz999999. The new Trail file is created and contains the header record plus the two data records associated with the source database CSN.
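To confirm what was written, you could open the new file in Logdump and count its records. This is an optional sanity check, shown here as a sketch; it assumes the same working directory as the session above and uses the filter reset form seen earlier, and the counts reported should match the records you saved:

open ./dirdat/zz999999
filter reset all
count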

Summary

With Logdump you can view the transaction data present in a trail file, identify missing transactions, and troubleshoot replication issues. In this article we presented only a small subset of the commands and their possible uses; there is much more to this utility, and every Oracle GoldenGate administrator should take the time to learn and understand this potent tool.