Replace "Listen 7779" with "Listen 80", this time, instead of replacing it, have both sit together one after the other as follows in %ORACLE_HOME%\Apache\Apache\conf\httpd.conf
Listen 7779
Listen 80
Monday, November 29, 2010
FRM-41838 Unable to open temporary record buffer file
One of our developers encountered the following error:
FRM-41838 Unable to open temporary record buffer file C:\DOCUMEN~1\LOCALS~1\Temp\1\s7e8.2
Cause
Oracle Forms tries to create a temporary file if a query is issued and the number of records returned exceeds the block property 'Records Buffered' value. When using OracleAS Forms Services, the temporary file will be written by default to the directory specified for the TMP variable in:
$ORACLE_HOME/opmn/conf/opmn.xml.
If this directory does not exist, has insufficient space, or has restricted read/write privileges, the FRM-41838 error can occur.
Solution
Solution A:
1. Edit the Forms Listener Servlet environment file. This is the file referenced by the formsweb.cfg applet parameter envFile. By default envFile = default.env; default.env is located in $ORACLE_HOME/forms90/server or in $ORACLE_HOME/forms/server.
2. Add a line such as:
TMP=/tmp
(assuming that the directory specified, e.g. /tmp, has sufficient space and read/write privileges)
3. Stop and restart the OC4J_BI_Forms component for the change to take effect
Solution B:
Change the value for the TMP variable setting in opmn.xml by following all the steps below:
1. Edit $ORACLE_HOME/opmn/conf/opmn.xml, where $ORACLE_HOME corresponds to the Oracle Application Server middle tier home (BI & Forms).
2. Look for the line that sets the TMP variable (see the example entry after these steps).
3. Check that the directory specified exists, has sufficient space and read/write privileges. If necessary, change the TMP value.
4. If the directory has been changed in opmn.xml, stop and restart all the middle tier components for the change to take effect.
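For reference, the TMP setting in opmn.xml is normally an environment variable entry of roughly the following form (the value shown is only an illustration; the exact surrounding elements depend on your installation):
<environment>
   <variable id="TMP" value="/tmp"/>
</environment>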
ORA-3136 WARNING inbound connection timed out
The "WARNING: inbound connection timed out (ORA-3136)" in the alert log indicates that the client was not able to complete it's authentication within the period of time specified by parameter SQLNET.INBOUND_CONNECT_TIMEOUT.
You may also witness ORA-12170 without timeout error on the database server sqlnet.log file.
This entry would also have the client address which failed to get authenticated. Some applications or JDBC thin driver applications may not have these details.
From 10.2.0.1 onwards the default value of the parameter SQLNET.INBOUND_CONNECT_TIMEOUT is 60 seconds, so if the client is not able to authenticate within 60 seconds, the warning appears in the alert log and the client connection is terminated.
There can be three main reasons for this error:
- The server gets a connection request from a malicious client which is not supposed to connect to the database, in which case the error thrown is the correct behavior. You can get the client address for which the error was thrown from the sqlnet log file.
- The server receives a valid client connection request but the client takes longer than the default 60 seconds to authenticate.
- The DB server is heavily loaded and cannot finish the client logon within the timeout specified.
The default value of 60 seconds is good enough in most conditions for the database server to authenticate a client connection. If it is taking longer, it is worth checking all the points below before going for the workaround:
1. Check whether a local connection on the database server is successful and quick.
2. If local connections are quick, check for underlying network delay with the help of your network administrator.
3. Check whether your database performance has degraded in any way.
4. Check the alert log for any critical errors, e.g. ORA-600 or ORA-7445, and get them resolved first.
These critical errors might have triggered the slowness of the database server.
As a workaround to suppress only these warning messages, you can set the parameters SQLNET.INBOUND_CONNECT_TIMEOUT and INBOUND_CONNECT_TIMEOUT_<listener_name> to a value greater than 60, e.g. 120, so that the client has more time to provide the authentication information to the database. You may have to tune these values further according to your setup.
To set these parameters:
1. In the server-side sqlnet.ora file add SQLNET.INBOUND_CONNECT_TIMEOUT, e.g.
SQLNET.INBOUND_CONNECT_TIMEOUT = 120
2. In the listener.ora file add INBOUND_CONNECT_TIMEOUT_<listener_name>; e.g. if the listener name is LISTENER then:
INBOUND_CONNECT_TIMEOUT_LISTENER = 110
How to check whether inbound timeout is active for the listener and database server:
For example, INBOUND_CONNECT_TIMEOUT_LISTENER = 110
You can check whether the parameter is active or not by simply doing a telnet to the listener port.
$ telnet <server_ip_address> <listener_port>
e.g.
$ telnet 123.23.23.23 1521
The telnet session should disconnect after 110 seconds which indicates that the inbound connection timeout for the listener is active.
Alternatively, check at the LSNRCTL prompt using:
LSNRCTL> set current_listener <listener_name>
LSNRCTL> show inbound_connect_timeout
To check whether database server SQLNET.INBOUND_CONNECT_TIMEOUT is active:
Eg.
SQLNET.INBOUND_CONNECT_TIMEOUT=120
a. For a dedicated server setup, enabling support-level sqlnet server tracing will show the timeout value as below:
niotns: Enabling CTO, value=120000 (milliseconds) <== 120 seconds
niotns: Not enabling dead connection detection.
niotns: listener bequeathed shadow coming to life...
b. For a shared server setup, telnet to the dispatcher port of the instance:
$ telnet <server_ip_address> <dispatcher_port>
Example:
$ telnet 123.23.23.23 51658
The telnet session should disconnect after 120 seconds which indicates that the sqlnet.inbound_connect_timeout is active.
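Putting it together, the two server-side configuration files would contain entries like the following (the listener name LISTENER and the timeout values are only examples; adjust them for your environment):
# sqlnet.ora
SQLNET.INBOUND_CONNECT_TIMEOUT = 120
# listener.ora
INBOUND_CONNECT_TIMEOUT_LISTENER = 110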
Ref:
465043.1
An Error Occurred While Starting OPMN Integrated Standalone Reports Server
While starting an OPMN integrated Standalone Reports Server via OEM, the following error is encountered:
An error occurred while starting "Reports Server:".
: ReportsServer - time out while waiting for a managed process to start
For more information, look at the logs using the related link below.
Related Link Error Logs
Cause
The value specified in the "REPORTS_TMP" environment variable was invalid because there was no directory named "C:\DOCUME~1\dev\LOCALS~1\Temp\1".
Solution
To implement the solution, please execute the following steps :
- Go to Windows Registry.
- Navigate to HKEY_LOCAL_MACHINE -> SOFTWARE -> Oracle.
- Select the HOME under Oracle where the Oracle 10g Application Server is installed.
- Change the value of the "REPORTS_TMP" environment variable from "C:\DOCUME~1\dev\LOCALS~1\Temp\1" to an existing directory. For example "c:\tmp".
- Now start the Reports Server and it will start properly.
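If you prefer the command line, the same registry value can also be set with reg.exe (available on Windows 2003 and later). The key name below is only a placeholder; use the actual Oracle home key of your Application Server installation:
C:\> reg add "HKLM\SOFTWARE\ORACLE\<AS_home_key>" /v REPORTS_TMP /t REG_SZ /d C:\tmp /f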
FRM-92101 Error When Running Forms After 70-75 Concurrent Connections and No New Connection Possible
This problem is specific to MS Windows operating systems - normally Windows 2000 or 2003. Although the occurrence of this issue is not common, it appears on both 32-bit and 64-bit Windows. You may receive the following error indications:
Forms 6i, 9.0.4.X, 10.1.2.X - FRM-92050 sometimes together with FRM-92101 or FRM-92102
Forms 9.0.4.X, 10.1.2.X - application.log shows FRM-93000
Forms 6i (Forms Listener Servlet) - jserv.log shows FRM-93000
Forms 6i (Forms Server) - error log shows ERROR 1444 - FORMS SERVICE STOPS UNEXPECTEDLY 1444
This happens with a small number of connections - the range may be 50 - 80. The specific number depends on operating system factors such as number of CPUs, processor speed, memory, etc. The problem may occur even with less than 50.
Cause
Windows has a memory resource called the Desktop Heap. A particular section of this resource runs low and causes the problem.
Solution
1. Change the logon property of the responsible service to "[x] Allow service to interact with desktop" (using all the defaults of the registry) and restart the service. This forces the Apache Service (Forms 6i) or the Oracle Process Manager (OPMN) Service (Forms 9i and above) to use the IO Desktop heap, which is larger than the Non-IO Desktop heap by default. The term "Desktop" here is not to be confused with the normal Windows desktop, which holds your icons, background, etc.; in this context it is Microsoft terminology for an area of memory.
Once you have verified that this increases the number of possible frmweb processes, return this setting to its default value (unchecked) and modify the Non-IO Desktop Heap size in the registry, as that is what the middle tier OPMN service uses.
2. Modify the Non-IO Desktop Heap size variable in the registry. This should not be done unless:
- A backup has been made of the registry. Any error in modification of the registry can render the system unusable.
- Solution 1 has been tested and has increased the number of processes that will run successfully.
The following information applies to Windows NT, Windows 2000, Windows 2003.
Enter the Registry and go to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\SubSystems.
Double click the Windows value in the right pane. Increase the appropriate SharedSection parameter, which will be the third one. The SharedSection setting specifies the system and desktop heaps using the following format:
SharedSection=xxxx,yyyy,zzzz
The default values are normally 1024,3072,512.
All the values are in kilobytes (KB).
xxxx = System-wide Heapsize. There is no need to modify this value and it is ignored for the rest of this discussion.
yyyy = IO Desktop Heapsize. This is the heap for memory objects in the IO Desktop and there is normally no need to modify this value. Programs that are run from a command line (.bat files) or an icon use this memory resource.
zzzz = Non-IO Desktop Heapsize. This is the heap for memory objects in the Non-IO Desktop and is used by programs that are spawned from a service. Therefore, this is the value that should be increased if the OPMN process is being started as a service which is the default for a windows installation of the Application Server. This number can be set as high as 30 MB (30720 KB). However, it is seldom necessary to set it this high.
If you change these values, you must reboot the system.
The values are recommended to be multiples of 512, but this is not strictly necessary. There is a hard limit of 48 MB for the total heap size on Windows.
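For example, to give processes spawned from services a 2 MB Non-IO Desktop heap while leaving the other sections at their defaults, the value data would read (figures are illustrative only):
SharedSection=1024,3072,2048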
Data Encryption
The DBMS_OBFUSCATION_TOOLKIT package provides a simple API for data encryption. Oracle8i only provides DES encryption whilst Oracle9i also includes DES3 encryption. In this article I'll create a simple package containing functions to DES encrypt and decrypt test strings:
Create the Package
First we create the package header containing the two conversion functions.
CREATE OR REPLACE PACKAGE toolkit AS
FUNCTION encrypt (p_text IN VARCHAR2) RETURN RAW;
FUNCTION decrypt (p_raw IN RAW) RETURN VARCHAR2;
END toolkit;
/
Create the Package Body
All VARCHAR2 inputs are padded to multiples of 8 characters, with the encryption key also being a multiple of 8 characters. The encryption key and padding character can be altered to suit.
CREATE OR REPLACE PACKAGE BODY toolkit AS
g_key RAW(32767) := UTL_RAW.cast_to_raw('12345678');
g_pad_chr VARCHAR2(1) := '~';
PROCEDURE padstring (p_text IN OUT VARCHAR2);
-- --------------------------------------------------
FUNCTION encrypt (p_text IN VARCHAR2) RETURN RAW IS
-- --------------------------------------------------
l_text VARCHAR2(32767) := p_text;
l_encrypted RAW(32767);
BEGIN
padstring(l_text);
DBMS_OBFUSCATION_TOOLKIT.desencrypt(input => UTL_RAW.cast_to_raw(l_text),
key => g_key,
encrypted_data => l_encrypted);
RETURN l_encrypted;
END;
-- --------------------------------------------------
-- --------------------------------------------------
FUNCTION decrypt (p_raw IN RAW) RETURN VARCHAR2 IS
-- --------------------------------------------------
l_decrypted VARCHAR2(32767);
BEGIN
DBMS_OBFUSCATION_TOOLKIT.desdecrypt(input => p_raw,
key => g_key,
decrypted_data => l_decrypted);
RETURN RTrim(UTL_RAW.cast_to_varchar2(l_decrypted), g_pad_chr);
END;
-- --------------------------------------------------
-- --------------------------------------------------
PROCEDURE padstring (p_text IN OUT VARCHAR2) IS
-- --------------------------------------------------
l_units NUMBER;
BEGIN
IF LENGTH(p_text) MOD 8 > 0 THEN
l_units := TRUNC(LENGTH(p_text)/8) + 1;
p_text := RPAD(p_text, l_units * 8, g_pad_chr);
END IF;
END;
-- --------------------------------------------------
END toolkit;
/
Test It
We can test the basic functionality using the following code.
DECLARE
l_value VARCHAR2(16) := 'ORACLE-BASE';
l_raw RAW(16);
BEGIN
DBMS_OUTPUT.put_line('l_value: ' || l_value);
l_raw := toolkit.encrypt(l_value);
DBMS_OUTPUT.put_line('l_raw: ' || l_raw);
DBMS_OUTPUT.put_line('Original Value : ' || toolkit.decrypt(l_raw));
END;
/
Remember that the length of the output from the encryption routine will be rounded up to the next multiple of 8 characters; for example, the 11-character string 'ORACLE-BASE' used above is padded to 16 characters, so its encrypted value needs a RAW(16) variable. If the results are to be stored as RAW datatypes in the database you must make sure enough room is allocated. SQL*Plus displays the contents of RAW variables in hex, so they appear to be twice as long as they actually are.
Encrypt Table Data
The following code provides a simple example of how data in a table might be encrypted using a trigger. First we must create a test table with an appropriate trigger.
CREATE TABLE encrypted_data (
username VARCHAR2(20),
data RAW(16)
);
CREATE OR REPLACE TRIGGER encrypted_data_biur_trg
BEFORE INSERT OR UPDATE ON encrypted_data
FOR EACH ROW
DECLARE
BEGIN
:new.data := toolkit.encrypt(UTL_RAW.cast_to_varchar2(:new.data));
END;
/
Next we test the trigger using some simple insert, update and query statements.
SQL> INSERT INTO encrypted_data (username, data)
2 VALUES ('tim_hall', UTL_RAW.cast_to_raw('My Secret Data'));
1 row created.
SQL> SELECT * FROM encrypted_data;
USERNAME DATA
-------------------- ----------------------------------------
tim_hall FA57C55510D258C73DE93059E3DC49EC
1 row selected.
SQL> COLUMN data FORMAT A40
SQL> SELECT username, toolkit.decrypt(data) AS data FROM encrypted_data;
USERNAME DATA
-------------------- ----------------------------------------
tim_hall My Secret Data
1 row selected.
SQL> UPDATE encrypted_data
2 SET data = UTL_RAW.cast_to_raw('My NEW Secret')
3 WHERE username = 'tim_hall';
1 row updated.
SQL> COLUMN data FORMAT A40
SQL> SELECT username, toolkit.decrypt(data) AS data FROM encrypted_data;
USERNAME DATA
-------------------- ----------------------------------------
tim_hall My NEW Secret
1 row selected.
SQL>
With the exception of the calls to the UTL_RAW package, this method hides most of the work from the developer.
View redo logs by logminer
Step 1: Check that supplemental logging is enabled for the database; if not, enable it.
connect / as sysdba;
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE; -- should be YES for log mining
alter database add supplemental log data; -- so that the DB stores the additional data needed for log mining
Step 2: Run the following block to add the redo log files and start LogMiner.
BEGIN
DBMS_LOGMNR.ADD_LOGFILE ('D:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\REDO03.LOG', DBMS_LOGMNR.NEW);
DBMS_LOGMNR.ADD_LOGFILE ('D:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\REDO02.LOG', DBMS_LOGMNR.ADDFILE);
DBMS_LOGMNR.ADD_LOGFILE ('D:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\REDO01.LOG', DBMS_LOGMNR.ADDFILE);
DBMS_LOGMNR.START_LOGMNR ( options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);
END;
/
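You can confirm which log files have been registered by querying V$LOGMNR_LOGS (a quick sanity check, not part of the original steps):
SELECT filename FROM V$LOGMNR_LOGS;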
Step 3: Mine the logs
Use the database control file to find and add redo log files that satisfy your specified time or SCN range to the LogMiner redo log file list.
ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
BEGIN
DBMS_LOGMNR.START_LOGMNR(STARTTIME => '01-Mar-2010 15:30:00',ENDTIME => '02-Mar-2010 15:45:00',
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);
END;
/
Step 4: Choose any object and query the mined changes for it
select sql_redo from v$logmnr_contents where table_name = 'CUST_PAYMENT_INFO' and operation='INSERT';
SELECT SQL_REDO FROM V$LOGMNR_CONTENTS
WHERE
SEG_NAME = 'CUST_PAYMENT_INFO' AND
SEG_OWNER = 'OE' AND
OPERATION = 'UPDATE' AND
DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'OE.CUST_PAYMENT_INFO.ACTIVE_CARD') != DBMS_LOGMNR.MINE_VALUE(UNDO_VALUE, 'OE.CUST_PAYMENT_INFO.ACTIVE_CARD');
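When you have finished querying V$LOGMNR_CONTENTS, it is good practice to end the LogMiner session and release its resources (a minimal closing step, not shown above):
BEGIN
DBMS_LOGMNR.END_LOGMNR;
END;
/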
Using ASMCMD Utility
After creating your ASM disk groups you will want to manage them. This can be done with SQL by connecting to the ASM instance, but it is also good to know the asmcmd utility for ASM management.
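For example, connected to the ASM instance with SQL*Plus, disk group space can be checked with a query such as (a minimal sketch):
SQL> SELECT name, state, total_mb, free_mb FROM v$asm_diskgroup;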
Step 1: Set ORACLE_HOME at the prompt
C:\Documents and Settings\inam>set ORACLE_HOME=D:\oracle\product\10.2.0\db_1
Step 2: set ORACLE_SID
C:\Documents and Settings\inam>set ORACLE_SID=+ASM
Step 3: Run the utility and use some commands
C:\Documents and Settings\inam>asmcmd
ASMCMD>
ASMCMD> pwd
+DB_DATA
ASMCMD> ls
ASMDB/
ASMCMD> pwd
+DB_DATA
ASMCMD> cd asmdb
ASMCMD> pwd
+DB_DATA/asmdb
ASMCMD> ls
CONTROLFILE/
DATAFILE/
ONLINELOG/
TEMPFILE/
ASMCMD> cd datafile
ASMCMD> pwd
+DB_DATA/asmdb/datafile
ASMCMD> ls
SYSAUX.261.733408851
SYSTEM.259.733408841
UNDOTBS1.260.733408851
Copying files from ASM to file system (11g)
ASMCMD> ls -lt
State Type Rebal Name
MOUNTED EXTERN N DBDATA/
MOUNTED EXTERN N DBFLASH/
ASMCMD> pwd
+dbdata/dbrac/datafile
ASMCMD> ls -lt
Type Redund Striped Time Sys Name
N db_ts04.dbf => +DBDATA/DBRAC/DATAFILE/DB.272.766401017
N db_ts03.dbf => +DBDATA/DBRAC/DATAFILE/DB.271.766400995
N db_ts02.dbf => +DBDATA/DBRAC/DATAFILE/DB.270.766400973
N db_ts01.dbf => +DBDATA/DBRAC/DATAFILE/DB.269.766400951
DATAFILE UNPROT COARSE APR 07 13:00:00 Y USERS.259.766393285
DATAFILE UNPROT COARSE APR 07 13:00:00 Y UNDOTBS2.265.766393499
DATAFILE UNPROT COARSE APR 07 13:00:00 Y UNDOTBS1.280.767361619
DATAFILE UNPROT COARSE APR 07 13:00:00 Y UNDOTBS1.258.766393285
DATAFILE UNPROT COARSE APR 07 13:00:00 Y SYSTEM.256.766393283
DATAFILE UNPROT COARSE APR 07 13:00:00 Y SYSAUX.257.766393285
DATAFILE UNPROT COARSE APR 07 13:00:00 Y DB.272.766401017
DATAFILE UNPROT COARSE APR 07 13:00:00 Y DB.271.766400995
DATAFILE UNPROT COARSE APR 07 13:00:00 Y DB.270.766400973
DATAFILE UNPROT COARSE APR 07 13:00:00 Y DB.269.766400951
N test_netbkup.dbf => +DBDATA/DBRAC/DATAFILE/TESTT_NETBKUP.279.766401253
N recop1.dbf => +DBDATA/DBRAC/DATAFILE/RECOP1.278.766401249
N dblog02.dbf => +DBDATA/DBRAC/DATAFILE/DBLOG.274.766632369
N dblog01.dbf => +DBDATA/DBRAC/DATAFILE/DBLOG.275.766632361
N examcont01.dbf => +DBDATA/DBRAC/DATAFILE/EXAMCONT.276.766401211
N exam1425_01.dbf => +DBDATA/DBRAC/DATAFILE/EXAM14251426.277.766401221
DATAFILE UNPROT COARSE APR 07 12:00:00 Y UNDOTBS2.281.767362637
DATAFILE UNPROT COARSE APR 07 12:00:00 Y TESTT_NETBKUP.279.766401253
DATAFILE UNPROT COARSE APR 07 12:00:00 Y RECOP1.278.766401249
DATAFILE UNPROT COARSE APR 07 12:00:00 Y DBLOG.275.766632361
DATAFILE UNPROT COARSE APR 07 12:00:00 Y DBLOG.274.766632369
DATAFILE UNPROT COARSE APR 07 12:00:00 Y DB.273.766401039
DATAFILE UNPROT COARSE APR 07 12:00:00 Y EXAMPLE.264.766393395
DATAFILE UNPROT COARSE APR 07 12:00:00 Y EXAMCONT.276.766401211
DATAFILE UNPROT COARSE APR 07 12:00:00 Y EXAM14251426.277.766401221
ASMCMD>
ASMCMD> cp test_netbkup.dbf d:\temp
copying +dbdata/dbrac/datafile/test_netbkup.dbf -> d:\temp/test_netbkup.dbf
lsct: Lists information about current Oracle ASM clients from the V$ASM_CLIENT view.
ASMCMD> lsct
DB_Name Status Software_Version Compatible_version Instance_Name Disk_Group
+ASM CONNECTED 11.2.0.3.0 11.2.0.3.0 +asm1 DBDATA
+ASM CONNECTED 11.2.0.3.0 11.2.0.3.0 +asm1 DBFLASH
TESTDB CONNECTED 11.2.0.3.0 11.2.0.0.0 homedb DBDATA
du: Total space in MB used by files; a particular directory can also be specified.
ASMCMD> du
Used_MB Mirror_used_MB
63576 63576
find: we can use a wildcard or specify a particular file type by using the '--type' clause
ASMCMD> find --type ONLINELOG +DATA *
ASMCMD> find +DATA example*
iostat: Uses the V$ASM_DISK_IOSTAT view to display I/O statistics of disks in mounted ASM disk groups
ASMCMD> iostat -G DATA
lsdg: Uses V$ASM_DISKGROUP_STAT view to list information about a particular disk group
ASMCMD> lsdg DATA
lsattr: List attributes of a disk group
ASMCMD> lsattr -l -G DATA
Use the setattr command to change an attribute
ASMCMD> setattr -G data compatible.rdbms 11.2.0.0.0
ASMCMD> lsattr -l -G DATA
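A full list of available commands, with brief usage notes, can be displayed from within the utility itself:
ASMCMD> help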
Create database (10gR2) manually on Windows based on ASM instance
Step 1: Set ORACLE_SID at the DOS prompt
C:\Documents and Settings\inam>set ORACLE_SID=ASMDB
Step 2: Create a parameter file for the database instance to be created
D:\ASMTEST\ASMDB\pfile\initASMDB.ora
control_files = +DB_DATA
undo_management = AUTO
db_name = ASMDB
db_block_size = 8192
sga_max_size = 1073741824
sga_target = 1073741824
db_create_file_dest = +DB_DATA
db_create_online_log_dest_1 = +DB_DATA
Step 3: Create a password file
C:\Documents and Settings\inam>orapwd file=D:\ASMTEST\ASMDB\pwdASMDB.ora password=oracle entries=5
Step 4: create/Start the instance
C:\Documents and Settings\inam>oradim -new -sid ASMDB -syspwd asmdb123 -pfile D:\ASMTEST\ASMDB\pfile\initASMDB.ora -startmode a
Instance created.
C:\Documents and Settings\inam>sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Tue Oct 26 12:37:05 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> shutdown immediate
SQL> startup nomount pfile=D:\ASMTEST\ASMDB\pfile\initASMDB.ora
ORACLE instance started.
Total System Global Area 1073741824 bytes
Fixed Size 1253124 bytes
Variable Size 264241404 bytes
Database Buffers 801112064 bytes
Redo Buffers 7135232 bytes
SQL>
Step 5: Create the database
SQL> create database ASMDB
2 character set WE8ISO8859P1
3 national character set utf8
4 undo tablespace undotbs1
5 default temporary tablespace temp;
Database created.
Step 6: Run following scripts
SQL> @D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\catalog.sql
SQL> @D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\catproc.sql
SQL> @D:\oracle\product\10.2.0\db_1\sqlplus\admin\pupbld.sql
Step 7: Test DB ( some admin task)
Create the service for your newly created db in tnsnames.ora and access via toad or sqlplus for your testing.
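A minimal tnsnames.ora entry for the new database might look like the following (the host name is a placeholder; adjust host, port and service name for your environment):
ASMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <your_host>)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ASMDB))
  )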
SQL> create tablespace myts_on_ASM datafile '+DB_DATA' size 200M;
Note: if you are on Linux, skip step 4 as there is no oradim on Linux; the other steps are the same.
Create ASM Instance(Manually) on Windows
Automatic Storage Management (ASM) is an integrated file system and volume manager expressly built for Oracle database files. ASM provides the performance of raw I/O with the easy management of a file system. It simplifies database administration by eliminating the need for you to directly manage potentially thousands of Oracle database files. It does this by enabling you to divide all available storage into disk groups. You manage a small set of disk groups and ASM automates the placement of the database files within those disk groups.
Resetting Listener Log without stopping Database
Often it is required to reset the listener log when it grows too large, for example beyond 2 GB. As this log file is held open by the listener all the time, you cannot simply delete it. By doing the following you can reset the listener log without stopping the database.
C:\Documents and Settings\inam>lsnrctl
LSNRCTL for 32-bit Windows: Version 10.2.0.1.0 - Production on 01-AUG-2010 10:33:31
Copyright (c) 1991, 2005, Oracle. All rights reserved.
Welcome to LSNRCTL, type "help" for information.
LSNRCTL> SET CURRENT_LISTENER LISTENER
Current Listener is LISTENER
LSNRCTL> SET LOG_FILE TEMP_LISTENER.LOG
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=srv2.domain)(PORT=1521)))
LISTENER parameter "log_file" set to temp_listener.log
The command completed successfully
At this point the original listener.log is released and can be renamed or deleted at the operating system level. Then switch logging back to the original file name:
LSNRCTL> SET LOG_FILE LISTENER.LOG
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=srv2.domain)(PORT=1521)))
LISTENER parameter "log_file" set to listener.log
The command completed successfully
LSNRCTL>
Map a FTP site to a Drive letter in Windows
The easiest way to access files and folders on an FTP server as a Windows drive letter is to install NetDrive and configure it for your FTP server. You can then view your FTP folders and files as normal Windows folders.
NetDrive can be downloaded from the vendor's website.
Sunday, November 28, 2010
Remove the local Streams configuration
Connect as SYSDBA on both the source (capture) and target (apply) sites, e.g. HOMEDEV & ASMDB, and execute the following.
SELECT propagation_name FROM dba_propagation; -- get propagation if existing
exec dbms_propagation_adm.drop_propagation('PROPAGATION$_61'); -- parameter value from query above.
exec dbms_streams_adm.remove_streams_configuration;
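Afterwards you can confirm that the configuration is gone by checking that these views return no rows (a quick sanity check, not part of the original steps):
SELECT capture_name FROM dba_capture;
SELECT apply_name FROM dba_apply;
SELECT propagation_name FROM dba_propagation;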
Related Posts:
Streams 10gR2 - How to |
One-Way SCHEMA Level Streams Replication |
One-Way SCHEMA Level Streams Replication
In a nutshell, replication using Oracle Streams is implemented in the following way:
1.- A background capture process is configured to capture changes made to tables, schemas, or the entire database. The capture process captures changes from the redo log and formats each captured change into a logical change record (LCR). The capture process uses LogMiner to mine the redo/archive logs to format LCRs.
2.- The capture process enqueues LCR events into a queue that is specified.
3.- This queue is scheduled to propagate events from one queue to another in a different database.
4.- A background apply process dequeues the events and applies them at the destination database.
For Streams overview & required configurations before using it please see Streams 10gR2 - How to
Please set up your environment. I've set up my environment as follows; keep this information in mind while doing the steps:
HOMEDEV Global Database name of the Source (capture) Site
ASMDB Global Database name of the Target (apply) Site
On HOMEDEV (SOURCE/CAPTURE)
Create the user for stream administration
conn sys/homedev@homedev as sysdba
create user STRMADMIN identified by STRMADMIN;
ALTER USER STRMADMIN DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP
QUOTA UNLIMITED ON USERS;
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
SELECT * FROM global_name;
HOMEDEV.REGRESS.RDBMS.DEV.US.ORACLE.COM
On ASMDB (TARGET/APPLY)
create user STRMADMIN identified by STRMADMIN;
create tablespace users datafile '+DB_DATA/asmdb/datafile/USERS.dbf' size 200M;
create tablespace EXAMPLE datafile '+DB_DATA/asmdb/datafile/EXAMPLE.dbf' size 200M;
ALTER USER STRMADMIN DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP
QUOTA UNLIMITED ON USERS;
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
ALTER DATABASE RENAME GLOBAL_NAME TO ASMDB.REGRESS.RDBMS.DEV.US.ORACLE.COM;
/* STEP 1.- Connect as the Streams Administrator in the source site HOMEDEV and create the streams queue */
connect STRMADMIN/STRMADMIN@HOMEDEV
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'STREAMS_QUEUE_TABLE',
queue_name => 'STREAMS_QUEUE',
queue_user => 'STRMADMIN');
END;
/
conn sys/homedev@homedev as sysdba
create public database link ASMDB.REGRESS.RDBMS.DEV.US.ORACLE.COM using 'ASMDB';
connect STRMADMIN/STRMADMIN@HOMEDEV
create database link ASMDB.REGRESS.RDBMS.DEV.US.ORACLE.COM connect to strmadmin
identified by strmadmin;
/* STEP 2.- Connect as the Streams Administrator in the target site ASMDB and create the streams queue */
connect STRMADMIN/STRMADMIN@ASMDB
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'STREAMS_QUEUE_TABLE',
queue_name => 'STREAMS_QUEUE',
queue_user => 'STRMADMIN');
END;
/
/* STEP 3.- Add apply rules for the Schema at the destination database */
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'HR',
streams_type => 'APPLY',
streams_name => 'STREAM_APPLY',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'HOMEDEV.REGRESS.RDBMS.DEV.US.ORACLE.COM');
END;
/
/* STEP 4.- Add capture rules for the schema HR at the source database */
CONN STRMADMIN/STRMADMIN@HOMEDEV
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'HR',
streams_type => 'CAPTURE',
streams_name => 'STREAM_CAPTURE',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'HOMEDEV.REGRESS.RDBMS.DEV.US.ORACLE.COM');
END;
/
/* STEP 5.- Add propagation rules for the schema HR at the source database. This step will also create a propagation job to the destination database */
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
schema_name => 'HR',
streams_name => 'STREAM_PROPAGATE',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@ASMDB.REGRESS.RDBMS.DEV.US.ORACLE.COM',
include_dml => true,
include_ddl => true,
source_database => 'HOMEDEV.REGRESS.RDBMS.DEV.US.ORACLE.COM');
END;
/
/* STEP 6.- Export, import and instantiation of tables from Source to Destination Database; if the objects are not present in the destination database, perform an export of the objects from the source database and import them into the destination database
Export from the Source Database:
Specify the OBJECT_CONSISTENT=Y clause on the export command.
By doing this, an export is performed that is consistent for each individual object at a particular system change number (SCN). */
D:\Home_Backup\STREAMTEST_EXP>exp userid=system/manager@homedev owner=HR file=hr.dmp log=hr_exp.log object_consistent=Y statistics=NONE
/* Import into the Destination Database:
Specify STREAMS_INSTANTIATION=Y clause in the import command.
By doing this, the streams metadata is updated with the appropriate information in the destination database corresponding to the SCN that is recorded in the export file */
D:\Home_Backup\STREAMTEST_EXP>imp userid=system/manager@asmdb full=Y constraints=Y file=hr.dmp ignore=y commit=y log=hr_imp.log streams_instantiation=y
/* STEP 7.- Specify an 'APPLY USER' at the destination database.
This is the user who will apply all DML and DDL statements.
The user specified in the APPLY_USER parameter must have the necessary privileges to perform DML and DDL changes on the apply objects. */
conn strmadmin/strmadmin@asmdb
BEGIN
DBMS_APPLY_ADM.ALTER_APPLY(
apply_name => 'STREAM_APPLY',
apply_user => 'HR');
END;
/
/* STEP 8.- Set the apply parameter disable_on_error to 'n' so apply does not abort for every error; then, start the Apply process on the destination */
conn strmadmin/strmadmin@asmdb
BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'STREAM_APPLY',
parameter => 'disable_on_error',
value => 'n');
END;
/
DECLARE
v_started number;
BEGIN
SELECT decode(status, 'ENABLED', 1, 0) INTO v_started
FROM DBA_APPLY WHERE APPLY_NAME = 'STREAM_APPLY';
if (v_started = 0) then
DBMS_APPLY_ADM.START_APPLY(apply_name => 'STREAM_APPLY');
end if;
END;
/
/* STEP 9.- Set up capture to retain 7 days worth of logminer checkpoint information, then start the Capture process on the source */
conn strmadmin/strmadmin@homedev
BEGIN
DBMS_CAPTURE_ADM.ALTER_CAPTURE(
capture_name => 'STREAM_CAPTURE',
checkpoint_retention_time => 7);
END;
/
begin
DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STREAM_CAPTURE');
end;
/
conn HR/HR@homedev
insert into HR.DEPARTMENTS values (991,'OTHER',205,1700);
commit;
alter table HR.EMPLOYEES add (NEWCOL VARCHAR2(10));
create table hr.t(id number);
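To verify that replication is working, connect to the target and check both the replicated data and the status of the Streams processes (a quick check using the object names from this setup):
conn HR/HR@asmdb
select * from HR.DEPARTMENTS where department_id = 991;
conn strmadmin/strmadmin@asmdb
SELECT apply_name, status FROM dba_apply;
conn strmadmin/strmadmin@homedev
SELECT capture_name, status FROM dba_capture;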
Related Posts:
Streams 10gR2 - How to |
Remove the local Streams configuration |