All posts by Carlos Magno

Carlos Magno – Over 15 years of experience as an Oracle DBA on complex projects in countries such as Brazil, India, the Netherlands, Luxembourg, France and Italy. Certifications: OCP 10g, OCP 11g, OCP 12c, OCE RAC, Exadata, ITIL and OEM 12c Cloud Control Implementation Specialist. https://lu.linkedin.com/in/carlos-magno-andrade-junior-5541835

Encrypt Tablespaces – Multitenant

References:
https://oracle-base.com/articles/12c/multitenant-transparent-data-encryption-tde-12cr1

https://docs.oracle.com/database/121/SQLRF/statements_1003.htm#SQLRF55976
https://docs.oracle.com/cloud/latest/db121/ASOAG/asotrans_config.htm#ASOAG10474

The tablespace will be created inside the pluggable database PDBPROD3.

Datafile location: /u02/oradata/cdbprod/pdbprod3/
Keystore location: /u01/oracle/admin/$ORACLE_SID/encryption_keystore/

1.) Include in sqlnet.ora:

ENCRYPTION_WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)(METHOD_DATA =
    (DIRECTORY = /u01/oracle/admin/$ORACLE_SID/encryption_keystore/)))

2.) Create the directory if it doesn't exist:

mkdir -p /u01/oracle/admin/$ORACLE_SID/encryption_keystore/

3.) Create the keystore:

[oracle@vm1 admin]$ cd /u01/oracle/admin/$ORACLE_SID/encryption_keystore/
[oracle@vm1 encryption_keystore]$ pwd
/u01/oracle/admin/cdbprod/encryption_keystore

From the root container (CDB$ROOT), execute:

ADMINISTER KEY MANAGEMENT CREATE KEYSTORE 
'/u01/oracle/admin/cdbprod/encryption_keystore/' IDENTIFIED BY oracle;

SQL> !ls -ltr /u01/oracle/admin/cdbprod/encryption_keystore/
 total 4
 -rw-r--r--. 1 oracle oinstall 2408 Dec 13 16:35 ewallet.p12

4.) Open the keystore from the root container:

ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY oracle 
CONTAINER=ALL;

5.) Set the master encryption key from the root container; note the CONTAINER=ALL clause:

ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY oracle 
WITH BACKUP CONTAINER=ALL;

6.) Check the Keys:
 SELECT con_id, key_id FROM v$encryption_keys;

    CON_ID KEY_ID
---------- --------------------------------------------------------------
         0 AZoc0eniGk+fvxf6hFmTPX8AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
         0 AWT1SYl9Y09Sv5KM701+2ngAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
         0 ATAVGrOY8k8nv3VLAuTAjwwAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
         0 AZK1GySH8k+Jv2lpedKOP/8AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
         0 AZyR5AVR3k9Mv6r7CBrT1REAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

COLUMN wrl_parameter FORMAT A50

SELECT * FROM v$encryption_wallet;

WRL_TYPE            WRL_PARAMETER                                   STATUS     WALLET_TYPE            WALLET_OR FULLY_BAC    CON_ID
-------------------- -------------------------------------------------- ---------- -------------------- --------- --------- ----------
FILE                /u01/oracle/admin/cdbprod/encryption_keystore/  OPEN       PASSWORD               SINGLE    NO                0

7.) Use the key to create an encrypted tablespace

SQL> alter session set container=pdbprod3;

Session altered.

Create tablespace ts_tde datafile 
'/u02/oradata/cdbprod/pdbprod3/ts_tde.dbf' 
size 100m ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);
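To confirm the tablespace really is encrypted, you can query the dictionary while connected to PDBPROD3. A minimal verification sketch (joining V$TABLESPACE with V$ENCRYPTED_TABLESPACES on TS#):

```sql
-- Run inside PDBPROD3: list encrypted tablespaces and their algorithm
SELECT t.name, e.encryptionalg, e.encryptedts
FROM   v$tablespace t
       JOIN v$encrypted_tablespaces e ON e.ts# = t.ts#;
```

TS_TDE should appear with ENCRYPTIONALG = AES256 and ENCRYPTEDTS = YES.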
  • Note: if the CDB is restarted, the keystore must be opened again
    from the root:
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY oracle 
CONTAINER=ALL;

 


External Table – Compressed Files

https://docs.oracle.com/database/121/ADMIN/tables.htm#ADMIN12898
Below is an example of an external table using the PREPROCESSOR option,
which is useful when you have compressed files. The example was done
in a pluggable database.

Files: c1.txt and c2.txt, with content like:

ENG,England,English
SCO,Scotland,English
IRE,Ireland,English
WAL,Wales,Welsh
FRA,France,French
GER,Germany,German
USA,United States of America,English

Compress them:

gzip c1.txt c2.txt

-rw-r--r--. 1 oracle oinstall       77 Dec 12 16:00 c1.txt.gz
-rw-r--r--. 1 oracle oinstall       92 Dec 12 16:00 c2.txt.gz
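The PREPROCESSOR mechanism can be sketched outside the database: the access driver invokes the program from the directory object (here dir_bin:'zcat'), passes the LOCATION file path as its argument, and loads whatever the program writes to stdout. A minimal stand-alone demo (the /tmp paths and four-row sample are assumptions for illustration):

```shell
# Build the sample file and compress it (hypothetical path under /tmp)
printf 'ENG,England,English\nSCO,Scotland,English\nIRE,Ireland,English\nWAL,Wales,Welsh\n' > /tmp/c1.txt
gzip -f /tmp/c1.txt                 # leaves /tmp/c1.txt.gz

# This mirrors what ORACLE_LOADER effectively runs for each LOCATION file:
# the preprocessor receives the file path and must stream records to stdout.
zcat /tmp/c1.txt.gz
```

Any filter that reads the given path and writes plain records to stdout would work the same way.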

sqlplus / as sysdba
SQL> alter session set container=PDBPROD2;
Session altered.
  • Create a directory object pointing to where the data files are placed:
    SQL> create or replace directory dir_dba as '/home/oracle/dba';
    
    Directory created.
    
    SQL> grant read, write on directory dir_dba to public;
    
    Grant succeeded.
  • Create a directory object pointing to the utility that will be used to
    uncompress the file:
    
    [oracle@vm1 dba]$ which gunzip
    /usr/bin/gunzip
    
    [oracle@vm1 dba]$ which zcat
    
    /usr/bin/zcat
    
    SQL> create or replace directory dir_bin as '/usr/bin';
    
    Directory created.
    
    SQL> grant read, write on directory dir_bin to public;
    
    Grant succeeded.
    
    Drop table cmagno.countries_ext;
    
    CREATE TABLE cmagno.countries_ext (
      country_code      VARCHAR2(5),
      country_name      VARCHAR2(50),
      country_language  VARCHAR2(50)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY dir_dba
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        PREPROCESSOR dir_bin:'zcat'
        FIELDS TERMINATED BY ','
        MISSING FIELD VALUES ARE NULL
        (
          country_code      CHAR(5),
          country_name      CHAR(50),
          country_language  CHAR(50)
        )
      )
      LOCATION ('c1.txt.gz','c1.txt.gz')
    )
    PARALLEL 1
    REJECT LIMIT UNLIMITED;
    
    SQL> select * from cmagno.countries_ext;
    
    COUNT COUNTRY_NAME                                   COUNTRY_LANGUAGE
    ----- -------------------------------------------------- -----------------
    ENG   England                                        English
    SCO   Scotland                                       English
    IRE   Ireland                                        English
    WAL   Wales                                          Welsh
    ENG   England                                        English
    SCO   Scotland                                       English
    IRE   Ireland                                        English
    WAL   Wales                                          Welsh
    
    Afterwards, you can check the data directory for the log files:
    
    -rw-r--r--. 1 oracle oinstall       77 Dec 12 16:00 c1.txt.gz
    -rw-r--r--. 1 oracle oinstall       92 Dec 12 16:00 c2.txt.gz
    -rw-r--r--. 1 oracle oinstall      676 Dec 12 16:59 COUNTRIES_EXT_15232.log
    -rw-r--r--. 1 oracle oinstall      676 Dec 12 16:59 COUNTRIES_EXT_15230.log

RMAN Convert between different platforms (Solaris -> Linux)

https://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmxplat.htm#BRADV89983
For this, I created a tablespace in a SUN Solaris 12c environment and performed a transportable tablespace (TTS) export.

Tablespace name: TS_TEST
Schema: HR
Objects: tables T1, T2 and T3, to be checked inside PDBPROD3 after the import
Target: CDBPROD:PDBPROD3
All tables have 39 rows.

Export:

BANNER CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production 0
PL/SQL Release 12.1.0.2.0 - Production 0
CORE 12.1.0.2.0 Production 0
TNS for Solaris: Version 12.1.0.2.0 - Production 0
NLSRTL Version 12.1.0.2.0 - Production 0

SQL> SELECT PLATFORM_NAME
FROM V$TRANSPORTABLE_PLATFORM
WHERE PLATFORM_ID =
 ( SELECT PLATFORM_ID
 FROM V$DATABASE ); 2 3 4 5

PLATFORM_NAME
-----------------------------------------------------------------------------------------------------
Solaris Operating System (x86-64)

SQL> alter tablespace ts_test read only;
Tablespace altered.

PDB: PDDPOC01
oracle@vdds-racdb-01:~$ expdp hr/hr@PDDPOC01 directory=dir_tmp transport_tablespaces=ts_test

Export: Release 12.1.0.2.0 - Production on Thu Nov 30 16:09:10 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
Starting "HR"."SYS_EXPORT_TRANSPORTABLE_01": hr/********@PDDPOC01 directory=dir_tmp transport_tablespaces=ts_test
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "HR"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HR.SYS_EXPORT_TRANSPORTABLE_01 is:

/tmp/expdat.dmp

******************************************************************************

Datafiles required for transportable tablespace TS_TEST:
 +DATA/DDDSRACDB0/5926684743AC6B8BE054020820FA629A/DATAFILE/ts_test.dbf

Job "HR"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Thu Nov 30 16:10:03 2017 elapsed 0 00:00:42

Switch to the ASM environment to copy the datafile out of ASM:

ORACLE_SID = [DDDSRACD1] ? +ASM1
The Oracle base remains unchanged with value /u01/oracle
oracle@vdds-racdb-01:~$ asmcmd cp +DATA/DDDSRACDB0/5926684743AC6B8BE054020820FA629A/DATAFILE/ts_test.dbf /tmp

copying +DATA/DDDSRACDB0/5926684743AC6B8BE054020820FA629A/DATAFILE/ts_test.dbf -> /tmp/ts_test.dbf




oracle@racdb01:/tmp$ ls -ltr
total 20968
drwxr----- 2 oracle oinstall 178 Nov 30 10:53 hsperfdata_oracle
drwxrwxrwx 2 root root 117 Nov 30 16:30 oracle
-rw-r----- 1 oracle oinstall 229376 Nov 30 16:36 test.dmp
-rw-r----- 1 oracle oinstall 10493952 Nov 30 16:37 ts_test.dbf
drwxr-xr-x 2 root root 302 Nov 30 16:37 hsperfdata_root

ON VM1:

-rwxr-x---. 1 oracle oinstall 229376 Nov 30 16:39 test.dmp
-rwxr-x---. 1 oracle oinstall 10493952 Nov 30 16:39 ts_test.dbf

[oracle@vm1 dba]$ pwd

/home/oracle/dba

Copy the files to an existing directory on the server, or create a new directory to be used during the import:

SQL> !ls -ltr /home/oracle/dba
total 10476
-rw-r--r--. 1 oracle oinstall 137 Nov 27 16:53 afiedt.buf
drwxrwxrwx. 2 oracle oinstall 64 Nov 30 10:56 scripts
-rwxr-x---. 1 oracle oinstall 229376 Nov 30 16:39 test.dmp
-rwxr-x---. 1 oracle oinstall 10493952 Nov 30 16:39 ts_test.dbf

SQL> create or replace directory dir_dba as '/home/oracle/dba';

Directory created.

SQL> grant read,write on directory dir_dba to public;

Grant succeeded.

SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
 5 PDBPROD3 READ WRITE NO

Convert the datafile:

SELECT PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM;

Convert the file, writing the output to the directory where the PDB datafiles are placed.

rman target /

RMAN> CONVERT DATAFILE '/home/oracle/dba/ts_test.dbf'
DB_FILE_NAME_CONVERT '/home/oracle/dba', '/u02/oradata/cdbprod/pdbprod3'
FROM PLATFORM 'Solaris Operating System (x86-64)';
2> 3>

Starting conversion at target at 30-NOV-17
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
channel ORA_DISK_1: starting datafile conversion
input file name=/home/oracle/dba/ts_test.dbf
converted datafile=/u02/oradata/cdbprod/pdbprod3/ts_test.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
Finished conversion at target at 30-NOV-17

Starting Control File and SPFILE Autobackup at 30-NOV-17
piece handle=/u03/fra/CDBPROD/autobackup/2017_11_30/o1_mf_s_961435712_f20dfjv3_.bkp comment=NONE

Finished Control File and SPFILE Autobackup at 30-NOV-17

[oracle@vm1 datafile]$ ls -ltr /u02/oradata/cdbprod/pdbprod3/ts_test.dbf
-rw-r-----. 1 oracle oinstall 10493952 Nov 30 17:28 /u02/oradata/cdbprod/pdbprod3/ts_test.dbf

Check the TNS entry for PDBPROD3

PDBPROD3 =
 (DESCRIPTION =
 (ADDRESS_LIST =
 (ADDRESS = (PROTOCOL = TCP)(HOST = vm1)(PORT = 1521))
 )
 (CONNECT_DATA =
 (SERVER = DEDICATED)
 (SERVICE_NAME = pdbprod3)
 )
 )

--> Import the datafile

impdp system/oracle@pdbprod3 dumpfile=test.dmp directory=dir_dba transport_datafiles=/u02/oradata/cdbprod/pdbprod3/ts_test.dbf

Import: Release 12.1.0.2.0 - Production on Thu Nov 30 17:34:50 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Source time zone is +02:00 and target time zone is +00:00.

Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/********@pdbprod3 dumpfile=test.dmp directory=dir_dba transport_datafiles=/u02/oradata/cdbprod/pdbprod3/ts_test.dbf
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Thu Nov 30 17:35:34 2017 elapsed 0 00:00:41

--> Check the information

SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
 2 PDB$SEED READ ONLY NO
 3 PDBPROD1 READ WRITE NO
 4 PDBPROD2 READ WRITE NO
 5 PDBPROD3 READ WRITE NO
 6 PDBPROD4 READ WRITE NO

SQL> alter session set container=PDBPROD3;

Session altered.

SQL> SELECT COUNT(*) FROM HR.T1;
 COUNT(*)
----------
 39

SQL> SELECT COUNT(*) FROM HR.t2;
 COUNT(*)
----------
 39

SQL> SELECT COUNT(*) FROM HR.t3;
 COUNT(*)
----------
 39

Cleaning AUD$

Notes:
SCRIPT: Basic example to manage AUD$ table in 11.2 with dbms_audit_mgmt (Doc ID 1362997.1)
DBMS_AUDIT_MGMT.INIT_CLEANUP Fails With ORA-46267 (Doc ID 1508787.1)

This is a procedure to clean the AUD$ table while avoiding locks during the truncate process. Although it is an online operation, it is recommended to run it during a maintenance window. For small or non-production environments you can try executing it online, without stopping the application.

Script: purge_job.sql ==> Run this script as SYS database user account.

=> Before starting, take the current size of AUD$ from DBA_SEGMENTS.
select sum(bytes)/(1024*1024) mb 
from dba_segments where segment_name='AUD$';

=> Create a new tablespace sized according to the value returned by the
previous query.
create tablespace AUDTBS datafile '/oradata/db/audtbs1.dbf' 
size <VALUE>g autoextend on next 128m maxsize 20g;

=> Take the current number of rows, to compare afterwards (optional):
select count(*) from aud$;

prompt start of the script

set serveroutput on
prompt Change based on our customization done 
update dam_config_param$ set string_value='AUDTBS' 
where audit_trail_type#=1 and param_id=22;
commit;

prompt First Step: init cleanup (if not already)

BEGIN
IF NOT DBMS_AUDIT_MGMT.IS_CLEANUP_INITIALIZED(
DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD) THEN
dbms_output.put_line('Calling DBMS_AUDIT_MGMT.INIT_CLEANUP');
 DBMS_AUDIT_MGMT.INIT_CLEANUP(
audit_trail_type => dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD,
default_cleanup_interval => 12);
else
 dbms_output.put_line('Cleanup for STD was already initialized');
end if;
end;
/

prompt revert back to default values again
update dam_config_param$ set string_value='SYSAUX' 
where audit_trail_type#=1 and param_id=22;
commit;

prompt set last archive timestamp to older than 7 days

begin
DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
last_archive_time => sysdate - 7);
end;
/

prompt setup a purge job

BEGIN
DBMS_AUDIT_MGMT.CREATE_PURGE_JOB (
AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
AUDIT_TRAIL_PURGE_INTERVAL => 12,
AUDIT_TRAIL_PURGE_NAME => 'Standard_Audit_Trail_PJ',
USE_LAST_ARCH_TIMESTAMP => TRUE );
END;
/

prompt call DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP regularly to advance the last archive timestamp

create or replace 
procedure set_archive_retention (retention in number default 7) as
begin
 DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
 audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
 last_archive_time => sysdate - retention);
end;
/

BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'advance_archive_timestamp',
job_type => 'STORED_PROCEDURE',
job_action => 'SET_ARCHIVE_RETENTION',
number_of_arguments => 1,
start_date => SYSDATE,
repeat_interval => 'freq=hourly;interval=12' ,
enabled => false,auto_drop => FALSE);
dbms_scheduler.set_job_argument_value(
job_name =>'advance_archive_timestamp',
argument_position =>1,argument_value => 7);
DBMS_SCHEDULER.ENABLE('advance_archive_timestamp');
End;
/

BEGIN
DBMS_SCHEDULER.run_job (
job_name => 'advance_archive_timestamp',use_current_session => FALSE);
END;
/

prompt End of the script

To verify the purge status and the configured jobs, execute the
following queries:

select min(NTIMESTAMP#) from aud$;
select LAST_START_DATE,LAST_RUN_DURATION,NEXT_RUN_DATE 
from dba_scheduler_jobs where job_name='ADVANCE_ARCHIVE_TIMESTAMP';
select LAST_START_DATE,LAST_RUN_DURATION,NEXT_RUN_DATE 
from dba_scheduler_jobs where job_name='STANDARD_AUDIT_TRAIL_PJ';

 

Convert a standby into a snapshot database

 

Here is the article on how to create a Data Guard configuration with the broker:
https://ezdba.wordpress.com/2017/05/15/dataguard-setup-with-broker/
--- Convert the physical standby database to a snapshot database
and open it for read-write operations.

-- Primary: Boston hostname(vddl-ocme1-rac-01)
-- Standby: London hostname(vddl-ocm-odg-01)

1.) Terminal window with LONDON environment variables

[oracle@vddl-ocm-odg-01 ~]$ . oraenv
ORACLE_SID = [oracle] ? london
The Oracle base remains unchanged with value /opt/oracle
[oracle@vddl-ocm-odg-01 ~]$

sqlplus / as sysdba

SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PHYSICAL STANDBY

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

SQL> show parameter DB_FLASHBACK_RETENTION_TARGET

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target integer 1440

SQL> show parameter DB_RECOVERY_FILE_DEST

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string /opt/oracle/oradata/london/arc
db_recovery_file_dest_size big integer 11105M

select file_type,number_of_files,percent_space_used
from v$recovery_area_usage;

FILE_TYPE NUMBER_OF_FILES PERCENT_SPACE_USED
----------------------- -------------------- --------------------
CONTROL FILE 0 0
REDO LOG 4 1.8
ARCHIVED LOG 27 7.6
BACKUP PIECE 2 .31
IMAGE COPY 0 0
FLASHBACK LOG 6 2.7
FOREIGN ARCHIVED LOG 0 0
AUXILIARY DATAFILE COPY 0 0

2.) Stop the RECOVERY

alter database recover managed standby database cancel;

3.) Convert to snapshot

alter database convert to snapshot standby;
SQL> select database_role from v$database;

DATABASE_ROLE
----------------
SNAPSHOT STANDBY

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

SQL> select name, storage_size from v$restore_point;

NAME STORAGE_SIZE
---------------------------------------- --------------------
SNAPSHOT_STANDBY_REQUIRED_06/19/2017 10: 52428800
33:40

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
MOUNTED

4.) Open the database

SQL> alter database open;

Database altered.

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

5.) Test

SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
-------------------- ------------------------------ ---------- ----------
 2 PDB$SEED READ ONLY NO
 3 DEV1 MOUNTED
SQL>
SQL>
SQL> alter pluggable database dev1 open;

Pluggable database altered.

SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
-------------------- ------------------------------ ---------- ----------
 2 PDB$SEED READ ONLY NO
 3 DEV1 READ WRITE NO
SQL>
SQL>
SQL> alter session set container=dev1;

Session altered.

SQL>
SQL> create table misc1 (x varchar2(50) );

Table created.

SQL> insert into misc1 values ('Test Row');

1 row created.

SQL> commit;

Commit complete.

SQL> conn / as sysdba
Connected.

INSTANCE_NAME
----------------
london

6.) Check the Primary database: Boston

. oraenv
boston

SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;

MAX(SEQUENCE#) THREAD#
-------------------- --------------------
 74 1

alter system switch logfile;

7.) Check the Sequence on standby snapshot database london.

SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;

MAX(SEQUENCE#) THREAD#
-------------------- --------------------
 75 1

Note: here the sequence is higher than on the primary because this
configuration also uses FAR SYNC.

8.) Convert back to standby:

. oraenv

london

sqlplus / as sysdba

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 520093696 bytes
Fixed Size 2926272 bytes
Variable Size 356518208 bytes
Database Buffers 155189248 bytes
Redo Buffers 5459968 bytes
Database mounted.
SQL> alter database convert to physical standby;

Database altered.

CON_ID CON_NAME OPEN MODE RESTRICTED
-------------------- ------------------------------ ---------- ----------
 2 PDB$SEED MOUNTED
 3 DEV1 MOUNTED

SQL> select file_type,number_of_files,percent_space_used
from v$recovery_area_usage; 2

FILE_TYPE NUMBER_OF_FILES PERCENT_SPACE_USED
----------------------- -------------------- --------------------
CONTROL FILE 0 0
REDO LOG 4 1.8
ARCHIVED LOG 29 7.67
BACKUP PIECE 2 .31
IMAGE COPY 0 0
FLASHBACK LOG 6 2.7
FOREIGN ARCHIVED LOG 0 0
AUXILIARY DATAFILE COPY 0 0

8 rows selected.

SQL>
SQL>
SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
-------------------- ------------------------------ ---------- ----------
 2 PDB$SEED MOUNTED
 3 DEV1 MOUNTED
SQL> alter database open;

Database altered.

SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
-------------------- ------------------------------ ---------- ----------
 2 PDB$SEED READ ONLY NO
 3 DEV1 MOUNTED
SQL> alter pluggable database dev1 open;

Pluggable database altered.

SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
-------------------- ------------------------------ ---------- ----------
 2 PDB$SEED READ ONLY NO
 3 DEV1 READ ONLY NO
SQL> alter session set container=dev1;

Session altered.

SQL> select * from misc1;
select * from misc1
 *
ERROR at line 1:
ORA-00942: table or view does not exist




SQL> conn / as sysdba
Connected.

INSTANCE_NAME
----------------
london

1 row selected.




SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 520093696 bytes
Fixed Size 2926272 bytes
Variable Size 356518208 bytes
Database Buffers 155189248 bytes
Redo Buffers 5459968 bytes
Database mounted.
SQL> alter database recover managed standby database disconnect;

Database altered.


exit;

 

Dataguard – FAR SYNC

Here is the article on how to create a Data Guard configuration with the broker:
https://ezdba.wordpress.com/2017/05/15/dataguard-setup-with-broker/
FAR SYNC:
https://docs.oracle.com/database/121/SBYDB/create_fs.htm#SBYDB5441

Dataguard Far Sync.

Primary: Boston - hostname(vddl-ocme1-rac-01)
Standby: London - hostname(vddl-ocm-odg-01)

Far Sync:
bostonFS - hostname(vddl-ocm-odg-01)
londonFS - hostname(vddl-ocme1-rac-01)

tns entries on both sides for:

BOSTON 
LONDON 
LONDONFS 
BOSTONFS

1.)
create pfile='/tmp/initbostonFS.ora' from spfile;

2.)
alter database create far sync instance controlfile as '/tmp/bostonFS.ctl';

3.)
cp /opt/oracle/product/12.1.0.2/standalone/dbs/orapwboston /tmp

4.) 
[oracle@vddl-ocme1-rac-01 tmp]$ scp *boston* vddl-ocm-odg-01:/tmp
oracle@vddl-ocm-odg-01's password:
bostonFS.ctl 100% 17MB 17.4MB/s 00:00
initbostonFS.ora 100% 1983 1.9KB/s 00:00
orapwboston

5.) on Standby server:
mkdir -p /opt/oracle/oradata/bostonFS/
mkdir -p /opt/oracle/oradata/bostonFS/dev1/
mkdir -p /opt/oracle/fast_recovery_area/BOSTONFS
mkdir -p /opt/oracle/fast_recovery_area/bostonFS
mkdir -p /opt/oracle/admin/bostonFS/adump

6.) On Standby set the variables
. oraenv
bostonFS

7.) Edit

vi /tmp/initbostonFS.ora
:%s/boston/bostonFS/g

*.db_name='boston'
*.log_archive_config='dg_config=(boston,bostonFS,london,londonFS)'
*.log_archive_dest_2='SERVICE=london SYNC REOPEN=15 valid_for=(STANDBY_LOGFILES,STANDBY_ROLE) db_unique_name=london'
*.control_files='/opt/oracle/oradata/bostonFS/bostonFS01.ctl','/opt/oracle/fast_recovery_area/bostonFS/bostonFS02.ctl'
#*.local_listener='LISTENER_BOSTON'
*.db_unique_name=bostonFS
*.fal_server=boston
*.log_file_name_convert='boston','bostonFS'

8.) Copy controlfile

cp /tmp/bostonFS.ctl /opt/oracle/oradata/bostonFS/bostonFS01.ctl
cp /tmp/bostonFS.ctl /opt/oracle/fast_recovery_area/bostonFS/bostonFS02.ctl

9.) Copy Password FILE
cp /tmp/orapwboston $ORACLE_HOME/dbs/orapwbostonFS

10.) Check the environment variables to see if they point to bostonFS

set | grep ORA

11.) Create the spfile:

sqlplus / as sysdba
create spfile from pfile='/tmp/initbostonFS.ora';

12.) Mount the FAR Sync Instance:
startup mount;

13.) From Primary Server
sqlplus / as sysdba

--> Old Configurations
service="(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=vddl-ocm-odg-01.lux.eproseed.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=london.example.com)(SERVER=DEDICATED)))", ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="london" net_timeout=30,valid_for=(online_logfile,all_roles)

--> New configuration
alter system set log_archive_config='dg_config=(boston,bostonFS,london,londonFS,london2)' scope=both; --> must also be set on the London standby
alter system set log_archive_dest_2='SERVICE=bostonFS SYNC REOPEN=15 valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=bostonFS' scope=both;

14.) Check the sequences:

Primary:
SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;

MAX(SEQUENCE#) THREAD#
-------------------- --------------------
 49 1

alter system switch logfile;

SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;

MAX(SEQUENCE#) THREAD#
-------------------- --------------------
 50 1

StandBy: bostonFS.
At this point all TNS entries must be working; check the broker logs to confirm there are no errors.

SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;

MAX(SEQUENCE#) THREAD#
-------------------- --------------------
 50 1

StandBy: london:

SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;

MAX(SEQUENCE#) THREAD#
-------------------- --------------------
 50 1


15.) On standby server as bostonFS:
Check the Standby redolog files

1* select * from v$logfile
SQL> col member format a75
SQL> /

GROUP# STATUS TYPE MEMBER IS_ CON_ID
-------------------- ------- ------- --------------------------------------------------------------------------- --- --------------------
 3 ONLINE /opt/oracle/oradata/bostonFS/redo03.log NO 0
 2 ONLINE /opt/oracle/oradata/bostonFS/redo02.log NO 0
 1 ONLINE /opt/oracle/oradata/bostonFS/redo01.log NO 0
 4 STANDBY /opt/oracle/fast_recovery_area/BOSTONFS/onlinelog/o1_mf_4_dn7z8tbx_.log YES 0
 5 STANDBY /opt/oracle/fast_recovery_area/BOSTONFS/onlinelog/o1_mf_5_dn7z8wgm_.log YES 0
 6 STANDBY /opt/oracle/fast_recovery_area/BOSTONFS/onlinelog/o1_mf_6_dn7z8x7c_.log YES 0
 7 STANDBY /opt/oracle/fast_recovery_area/BOSTONFS/onlinelog/o1_mf_7_dn7z8tbm_.log YES 0

16.) Adding a second far sync, now for the London standby. When London becomes primary, the redo will be sent to londonFS
instead of directly to boston.

. oraenv
boston

connect to vddl-ocme1-rac-01 - PRIMARY

[oracle@vddl-ocme1-rac-01 trace]$ cd /tmp
[oracle@vddl-ocme1-rac-01 tmp]$ ls -ltr
total 17988
drwx------. 2 root root 40 Jun 2 11:29 hsperfdata_root
-rw-r--r--. 1 oracle oinstall 135548 Jun 13 15:52 dwh_test.log
drwxr-xr-x. 2 oracle oinstall 60 Jun 14 22:19 hsperfdata_oracle
-rw-r--r--. 1 oracle oinstall 1983 Jun 16 16:01 initbostonFS.ora <-------
-rw-r-----. 1 oracle oinstall 18268160 Jun 16 16:01 bostonFS.ctl
-rw-r-----. 1 oracle oinstall 7680 Jun 16 16:02 orapwboston

mv /tmp/initbostonFS.ora /tmp/initlondonFS.ora

mkdir -p /opt/oracle/oradata/londonFS/
mkdir -p /opt/oracle/oradata/londonFS/dev1/
mkdir -p /opt/oracle/fast_recovery_area/LONDONFS
mkdir -p /opt/oracle/fast_recovery_area/londonFS
mkdir -p /opt/oracle/admin/londonFS/adump

17.) Fix the /tmp/initlondonFS.ora

:%s/boston/london/g

then adjust these parameters:

*.db_name='boston'
*.control_files='/opt/oracle/oradata/londonFS/londonFS01.ctl','/opt/oracle/fast_recovery_area/londonFS/londonFS02.ctl'
*.log_archive_config='dg_config=(boston,bostonFS,london,londonFS,london2)'
*.log_file_name_convert='boston','londonFS'
*.log_archive_dest_2='SERVICE=boston ASYNC REOPEN=15 valid_for=(STANDBY_LOGFILES,STANDBY_ROLE) db_unique_name=boston'
*.db_unique_name=londonFS
*.fal_server=london

18.) Copy controlfiles

[oracle@vddl-ocme1-rac-01 tmp]$ ls -tlr /tmp/*.ctl
-rw-r-----. 1 oracle oinstall 18268160 Jun 16 16:01 /tmp/bostonFS.ctl

cp /tmp/bostonFS.ctl /opt/oracle/oradata/londonFS/londonFS01.ctl
cp /tmp/bostonFS.ctl /opt/oracle/fast_recovery_area/londonFS/londonFS02.ctl

19.) Copy Passwordfile

[oracle@vddl-ocme1-rac-01 tmp]$ ls -ltr /tmp/orapw*
-rw-r-----. 1 oracle oinstall 7680 Jun 16 16:02 /tmp/orapwboston

cp /tmp/orapwboston $ORACLE_HOME/dbs/orapwlondonFS

20.) Environment variables

Include londonFS in /etc/oratab, then set the environment:

. oraenv
londonFS

21.) Create the spfile for londonFS
sqlplus / as sysdba
create spfile from pfile='/tmp/initlondonFS.ora';
startup mount;

22.) On the standby server:

. oraenv
london

sqlplus / as sysdba

SQL> show parameter log_archive_dest_2

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2 string

alter system set log_archive_dest_2='SERVICE=londonFS SYNC REOPEN=15 valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=londonFS' scope=both;

Now the far sync for the London database is done; it will be tested during the switchover tests.