Feed aggregator

Similarity Search with Oracle’s Vector Datatype

DBASolved - 10 hours 57 min ago

In my last two posts I showed you what the Oracle Vector Datatype is and how to update existing data […]

The post Similarity Search with Oracle’s Vector Datatype appeared first on DBASolved.

Categories: DBA Blogs

View with pivot and group by grouping sets work in 12c but not in 19. Error ORA-56903

Tom Kyte - 12 hours 27 min ago
Views with GROUP BY GROUPING SETS and PIVOT (directly or in a referenced view) that work in Oracle 12c fail in Oracle 19 with error ORA-56903: sys_op_pivot function is not allowed here. But neither the view nor the referenced view contains an explicit call to sys_op_pivot; maybe Oracle uses it internally during execution of the views. Thanks in advance. Best regards. Following a suggestion I ran: alter session set optimizer_features_enable = '12.2.0.1', but the error persists. It appears in every view with grouping sets that uses a pivot clause, whether in a subview used as a base view or directly in the current view. If the view with the pivot clause is a subview, the subview itself works correctly.
Categories: DBA Blogs

Record / Check Login Information for Standby DBs

Tom Kyte - 12 hours 27 min ago
Hello,

We want to housekeep our user accounts and remove unused and locked accounts. As far as I understand, the information in dba_users comes from the primary DB. Users are not allowed to log on to the primary to query data; they must log on to the read-only standby (enforced by a trigger). When I look in dba_users on the standbys I can see several users that have rarely or never logged on:

<code>select username, account_status, nvl(to_char(last_login),'never logged on') "Last Login"
from dba_users
where oracle_maintained = 'N'
and username not in ('AAAAAAAAAAA','BBBBBB','CCCCCCC')
and username not like '%READ%'
and username not like '%Exxx%'
order by "Last Login" desc;</code>

USERNAME         ACCOUNT_STATUS  Last Login
---------------  --------------  ------------------------------
Pxxxxxxx         OPEN            never logged on
Pxxxxxxx_03      LOCKED          never logged on
Pxxxxxxx_05      LOCKED          never logged on
Pxxxxxxx_04      LOCKED          never logged on
Pxxxxxxx_01      LOCKED          never logged on
BRxxxxxxx        OPEN            never logged on
Pxxxxxxx_02      LOCKED          never logged on
Sxxxxxxx         EXPIRED         never logged on
Jxxxxxxx         EXPIRED         never logged on
Mxxxxxxx         OPEN            2020-09-05:19:48:06 GMT+01:00
Bxxxxx           OPEN            2020-09-05:19:19:52 GMT+01:00
Axxxxxx          OPEN            2016-05-20:09:17:33 GMT+01:00
Pxxxxxxxxxx_01   OPEN            2016-04-21:10:48:34 GMT+01:00
Kxxxxx           OPEN            2016-04-19:13:50:33 GMT+01:00
Pxxxxxxxxxx_01   OPEN            2016-04-13:14:18:17 GMT+01:00

However, this information from dba_users is identical on the primary and standby DBs, and the users told me that they have logged on to the standby recently. As far as I understand, the information in dba_users on the standby has been inherited from the primary, since normal catalogue tables are not updated on the standby. Is this correct? How can I see last logins on the standby, preferably without using auditing, which could cause performance degradation (this is a production system where performance is key)?

We are using Active Data Guard, and our idea at the moment is to record logins to the standby using a trigger which checks whether it is running on the standby or the primary and then writes the logon data across a DB link into a table on the primary.

Many thanks,
Alison
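As a rough illustration of the trigger idea described in the question (not a tested solution: the table logon_audit and the database link to_primary are hypothetical, and whether DML over a database link is permitted from a read-only standby session depends on version and configuration):

create or replace trigger trg_log_standby_logon
after logon on database
declare
  pragma autonomous_transaction;
  v_role v$database.database_role%type;
begin
  select database_role into v_role from v$database;
  if v_role = 'PHYSICAL STANDBY' then
    -- write the logon record into a table on the primary over the DB link
    insert into logon_audit@to_primary (username, logon_time)
    values (sys_context('USERENV','SESSION_USER'), systimestamp);
    commit;
  end if;
exception
  when others then null;  -- never block a logon because logging failed
end;
/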
Categories: DBA Blogs

move table to new tablespace

Tom Kyte - 12 hours 27 min ago
Hi Tom,

I have an Oracle cluster database (EE, 2 nodes) with a big table containing large LOBs. After compressing the LOB files I moved them to a new tablespace, so the main table shrank to about 300 MB, but it is still taking 1.2 TB of space. My concern is about that space: why can't I reclaim it? I created a new tablespace and did a table move, but it didn't work and took a very long time, and a shrink didn't work either. I think there may be a problem with the high water mark. What do I have to do to gain this space back? Thanks.

The LOBs were originally stored in the same tablespace as the other data, called DATA (1.2 TB). I then moved them to a newly created tablespace called LOB_DATA. The problem is that the shrink space on tablespace DATA did nothing and the table move also doesn't work, so how do I reclaim the free extents in the 1.2 TB DATA tablespace?
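For reference, the usual statements for this kind of reclaim look as follows (a sketch with hypothetical object and file names; the right choice depends on the situation):

-- Rebuild the table (and its LOBs) below the high water mark:
alter table big_table move;
alter table big_table move lob (lob_col) store as (tablespace lob_data);
alter index big_table_pk rebuild;  -- indexes become UNUSABLE after a move

-- Or shrink in place (requires row movement):
alter table big_table enable row movement;
alter table big_table shrink space cascade;

-- Datafile-level reclaim only works if the free space is at the end of the file:
alter database datafile '/u01/oradata/data01.dbf' resize 100g;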
Categories: DBA Blogs

Capacity Planning

Tom Kyte - 12 hours 27 min ago
Hi Tom,

I have some questions regarding capacity planning. Thanks in advance.

1. Is there any way to map LIOs and PIOs to the number of CPUs and number of disks?
2. Is there any place where I can find documents on capacity planning for an Oracle database / Sun Solaris environment?
3. I am very confused about sort_area_size. My understanding is: sort_area_size is the maximum threshold for a sort in memory, there is only one sort_area_size per session, and it is allocated from the UGA; sort_area_retained is there to store the result set from the SAS, there can be many per session at a time, and it is allocated from the PGA. Is that correct? When we do a first sort that is smaller than sort_area_size, is the memory allocated from the PGA or the UGA? Is it sort_area_size or sort_area_retained?

Regards,
Jeyaseelan M.
Categories: DBA Blogs

Installing and Running Oracle AHF ORACHK on a 12.2 DB Server

Hemant K Chitale - 20 hours 13 min ago

 The Oracle Autonomous Health Framework is described in Support Document "Autonomous Health Framework (AHF) - Including TFA and ORAchk/EXAchk (Doc ID 2550798.1)"

In a recent video I demonstrated running the 24.1 orachk (with "-b" for "Best Practices Check") against a 21.3 RAC Cluster.

Here I demonstrate the installation and execution against a 12.2 non-RAC database.

When you download the 24.1 release of AHF (AHF-LINUX_v24.1.0.zip, approximately 410MB), you have to unzip it and then run ahf_setup.  It is preferable to use the default location /opt/oracle.ahf  (and precreate a "data" subfolder if it doesn't exist).

If your first attempt at installation returns an error :

[ERROR] : AHF-00074: Required Perl Modules not found :  Data::Dumper

you can check the perl version and then download and install this module (Note: in the listings below, "AHF_Installer" is the location where I have extracted the installation zip file).


[root@vbgeneric AHF_Installer]# /bin/perl -v

This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi
(with 34 registered patches, see perl -V for more detail)

Copyright 1987-2012, Larry Wall

Perl may be copied only under the terms of either the Artistic License or the
GNU General Public License, which may be found in the Perl 5 source kit.

Complete documentation for Perl, including FAQ lists, should be found on
this system using "man perl" or "perldoc perl".  If you have access to the
Internet, point your browser at http://www.perl.org/, the Perl Home Page.

[root@vbgeneric AHF_Installer]# yum install perl-Data-Dumper
Loaded plugins: langpacks, ulninfo
ol7_UEKR4                                                           | 3.0 kB  00:00:00     
ol7_latest                                                          | 3.6 kB  00:00:00     
(1/5): ol7_latest/x86_64/group_gz                                   | 136 kB  00:00:00     
(2/5): ol7_UEKR4/x86_64/updateinfo                                  | 130 kB  00:00:00     
(3/5): ol7_latest/x86_64/updateinfo                                 | 3.6 MB  00:00:00     
(4/5): ol7_latest/x86_64/primary_db                                 |  50 MB  00:00:02     
(5/5): ol7_UEKR4/x86_64/primary_db                                  |  37 MB  00:00:04     
Resolving Dependencies
--> Running transaction check
---> Package perl-Data-Dumper.x86_64 0:2.145-3.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===========================================================================================
 Package                   Arch            Version               Repository           Size
===========================================================================================
Installing:
 perl-Data-Dumper          x86_64          2.145-3.el7           ol7_latest           47 k

Transaction Summary
===========================================================================================
Install  1 Package

Total download size: 47 k
Installed size: 97 k
Is this ok [y/d/N]: y
Downloading packages:
perl-Data-Dumper-2.145-3.el7.x86_64.rpm                             |  47 kB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : perl-Data-Dumper-2.145-3.el7.x86_64                                     1/1 
  Verifying  : perl-Data-Dumper-2.145-3.el7.x86_64                                     1/1 

Installed:
  perl-Data-Dumper.x86_64 0:2.145-3.el7                                                    

Complete!
[root@vbgeneric AHF_Installer]#


Then resume the installation (precreate the "data" folder if it doesn't exist)

[root@vbgeneric AHF_Installer]# mkdir /opt/oracle.ahf/data
[root@vbgeneric AHF_Installer]# ./ahf_setup

AHF Installer for Platform Linux Architecture x86_64

AHF Installation Log : /tmp/ahf_install_241000_6588_2024_02_27-13_48_51.log

Starting Autonomous Health Framework (AHF) Installation

AHF Version: 24.1.0 Build Date: 202402051317

Default AHF Location : /opt/oracle.ahf

Do you want to install AHF at [/opt/oracle.ahf] ? [Y]|N : Y

AHF Location : /opt/oracle.ahf

AHF Data Directory stores diagnostic collections and metadata.
AHF Data Directory requires at least 5GB (Recommended 10GB) of free space.

Please Enter AHF Data Directory : /opt/oracle.ahf/data

AHF Data Directory : /opt/oracle.ahf/data

Do you want to add AHF Notification Email IDs ? [Y]|N : N

Extracting AHF to /opt/oracle.ahf

Setting up AHF CLI and SDK

Configuring TFA Services

Discovering Nodes and Oracle Resources

Successfully generated certificates.

Starting TFA Services
Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
Created symlink from /etc/systemd/system/graphical.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.

.-------------------------------------------------------------------------------.
| Host      | Status of TFA | PID  | Port  | Version    | Build ID              |
+-----------+---------------+------+-------+------------+-----------------------+
| vbgeneric | RUNNING       | 8540 | 39049 | 24.1.0.0.0 | 240100020240205131724 |
'-----------+---------------+------+-------+------------+-----------------------'

Running TFA Inventory...

Adding default users to TFA Access list...

.-------------------------------------------------------.
|              Summary of AHF Configuration             |
+-----------------+-------------------------------------+
| Parameter       | Value                               |
+-----------------+-------------------------------------+
| AHF Location    | /opt/oracle.ahf                     |
| TFA Location    | /opt/oracle.ahf/tfa                 |
| Orachk Location | /opt/oracle.ahf/orachk              |
| Data Directory  | /opt/oracle.ahf/data                |
| Repository      | /opt/oracle.ahf/data/repository     |
| Diag Directory  | /opt/oracle.ahf/data/vbgeneric/diag |
'-----------------+-------------------------------------'

Starting ORAchk Scheduler from AHF

AHF binaries are available in /opt/oracle.ahf/bin

AHF is successfully Installed

Do you want AHF to store your My Oracle Support Credentials for Automatic Upload ? Y|[N] : N

Moving /tmp/ahf_install_241000_6588_2024_02_27-13_48_51.log to /opt/oracle.ahf/data/vbgeneric/diag/ahf/

[root@vbgeneric AHF_Installer]# 


orachk can then be executed.  This execution checks against "Best Practices":



[root@vbgeneric AHF_Installer]# orachk -b

List of running databases

1. orcl12c
2. None of above

Select databases from list for checking best practices. For multiple databases, select 1 for All or comma separated number like 1,2 etc [1-2][1]. 1
.  .
.  .  

Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS

.  .  . . . .  
.  .  .  .  .  .  .  .  .  
-------------------------------------------------------------------------------------------------------
                                                 Oracle Stack Status                          
-------------------------------------------------------------------------------------------------------
  Host Name       CRS Installed       ASM HOME  RDBMS Installed    CRS UP    ASM UP  RDBMS UP    DB Instance Name
-------------------------------------------------------------------------------------------------------
  vbgeneric                  No           No          Yes           No       No      Yes             orcl12c
-------------------------------------------------------------------------------------------------------


Copying plug-ins

. .
.  .  .  .  .  .  

*** Checking Best Practice Recommendations ( Pass / Warning / Fail ) ***

.  

============================================================
              Node name - vbgeneric
============================================================
. . . . . . 
 Collecting - Database Parameters for orcl12c database
 Collecting - Database Undocumented Parameters for orcl12c database
 Collecting - List of active logon and logoff triggers for orcl12c database
 Collecting - CPU Information
 Collecting - Disk I/O Scheduler on Linux
 Collecting - DiskMount Information
 Collecting - Kernel parameters
 Collecting - Maximum number of semaphore sets on system
 Collecting - Maximum number of semaphores on system
 Collecting - Maximum number of semaphores per semaphore set
 Collecting - Memory Information
 Collecting - OS Packages
 Collecting - Operating system release information and kernel version
 Collecting - Patches for RDBMS Home
 Collecting - Patches xml for RDBMS Home
 Collecting - RDBMS patch inventory
 Collecting - Table of file system defaults
 Collecting - number of semaphore operations per semop system call
 Collecting - Database Server Infrastructure Software and Configuration
 Collecting - Disk Information
 Collecting - Root user limits
 Collecting - Verify ORAchk scheduler configuration
 Collecting - Verify TCP Selective Acknowledgement is enabled
 Collecting - Verify no database server kernel out of memory errors
 Collecting - Verify the vm.min_free_kbytes configuration

Data collections completed. Checking best practices on vbgeneric.
------------------------------------------------------------

 INFO =>     Traditional auditing is enabled in database for orcl12c
 WARNING =>  Linux swap configuration does not meet recommendation
 WARNING =>  Hidden database initialization parameters should not be set per best practice recommendations for orcl12c
 FAIL =>     loopback interface MTU value needs to be set to 16436
 INFO =>     Most recent ADR incidents for /u01/app/oracle/product/12.2/db_1
 FAIL =>     Verify Database Memory Allocation
 INFO =>     Oracle GoldenGate failure prevention best practices
 FAIL =>     The vm.min_free_kbytes configuration is not set as recommended
 INFO =>     user_dump_dest has trace files older than 30 days for orcl12c
 INFO =>     At some times checkpoints are not being completed for orcl12c
 WARNING =>  One or more redo log groups are not multiplexed for orcl12c
 WARNING =>  Primary database is not protected with Data Guard (standby database) for real-time data protection and availability for orcl12c
 INFO =>     Important Storage Minimum Requirements for Grid & Database Homes
 CRITICAL => Operating system hugepages count does not satisfy total SGA requirements
 FAIL =>     Table AUD$[FGA_LOG$] should use Automatic Segment Space Management for orcl12c
 FAIL =>     Database parameter DB_LOST_WRITE_PROTECT is not set to recommended value on orcl12c instance
 INFO =>     umask for RDBMS owner is not set to 0022
 FAIL =>     Database parameter DB_BLOCK_CHECKING on primary is not set to the recommended value. for orcl12c
 INFO =>     Operational Best Practices
 INFO =>     Database Consolidation Best Practices
 INFO =>     Computer failure prevention best practices
 INFO =>     Data corruption prevention best practices
 INFO =>     Logical corruption prevention best practices
 INFO =>     Database/Cluster/Site failure prevention best practices
 INFO =>     Client failover operational best practices
 WARNING =>  Oracle patch 30712670 is not applied on RDBMS_HOME /u01/app/oracle/product/12.2/db_1
 WARNING =>  Oracle patch 29867728 is not applied on RDBMS_HOME /u01/app/oracle/product/12.2/db_1
 WARNING =>  Oracle patch 31142749 is not applied on RDBMS_HOME /u01/app/oracle/product/12.2/db_1
 WARNING =>  Oracle patch 26749785 is not applied on RDBMS_HOME /u01/app/oracle/product/12.2/db_1
 WARNING =>  Oracle patch 29302565 is not applied on RDBMS_HOME /u01/app/oracle/product/12.2/db_1
 WARNING =>  Oracle patch 29259068 is not applied on RDBMS_HOME /u01/app/oracle/product/12.2/db_1
 WARNING =>  Oracle clusterware is not being used
 WARNING =>  RAC Application Cluster is not being used for database high availability on orcl12c instance
 WARNING =>  DISK_ASYNCH_IO is NOT set to recommended value for orcl12c
 WARNING =>  Flashback on PRIMARY is not configured for orcl12c
 INFO =>     Database failure prevention best practices
 WARNING =>  fast_start_mttr_target has NOT been changed from default on orcl12c instance
 FAIL =>     Active Data Guard is not configured for orcl12c
 WARNING =>  Perl Patch 31858212 is not found in 12.2.0.1 RDBMS_HOME. /u01/app/oracle/product/12.2/db_1
 WARNING =>  Oracle patch 31602782 is not applied on RDBMS_HOME /u01/app/oracle/product/12.2/db_1
 WARNING =>  Oracle patch 33121934 is not applied on RDBMS_HOME /u01/app/oracle/product/12.2/db_1
 WARNING =>  Oracle patch 31211220 is not applied on RDBMS_HOME /u01/app/oracle/product/12.2/db_1
 INFO =>     Software maintenance best practices
 INFO =>     Oracle recovery manager(rman) best practices
 INFO =>     Database feature usage statistics for orcl12c
 WARNING =>  Consider investigating changes to the schema objects such as DDLs or new object creation for orcl12c
 WARNING =>  Consider investigating the frequency of SGA resize operations and take corrective action for orcl12c
------------------------------------------------------------

UPLOAD [if required] - /opt/oracle.ahf/data/vbgeneric/orachk/user_root/output/orachk_vbgeneric_orcl12c_022724_140315.zip

[root@vbgeneric AHF_Installer]# 


Thus, you can actually run the 24.1 orachk against even a 12.2 non-RAC (single instance) database.

The complete report is in HTML format in the final ZIP file.  

Here's the header :



Categories: DBA Blogs

Oracle Vector Datatype – Updating table data

DBASolved - Mon, 2024-02-26 14:36

  In my last blog post on Oracle’s Vector data type, I simply showed you how the datatype is used […]

The post Oracle Vector Datatype – Updating table data appeared first on DBASolved.

Categories: DBA Blogs

Different lists of dependencies

Tom Kyte - Mon, 2024-02-26 13:46
As part of a migration effort, I'm researching dependencies and am confused by the different results displayed by SQL Developer's Dependencies tab versus running something like the following: <code>SELECT * FROM user_dependencies WHERE name = 'USP_COMPANYIMPORT';</code> The former displays 19 rows, whereas the latter displays only 15 rows, including two where the REFERENCED_OWNER is SYS. Q1: Why the difference? Q2: Is it possible to view the code SQL Developer runs to obtain its results? Thank you.
Categories: DBA Blogs

UTL_HTTP

Tom Kyte - Mon, 2024-02-26 13:46
Is Oracle working on the Oracle Database PL/SQL package UTL_HTTP to add support for http_versions: HTTP/2 and HTTP/3?
Categories: DBA Blogs

Dropping and purging table does not release space back to the tablespace

Tom Kyte - Mon, 2024-02-26 13:46
Dear Tom,

Oracle 4-node RAC, version 19c. In my tablespace I have a total of 570 partitioned tables with zero rows. Their initial extent is 8M for each partition, so collectively the empty tables occupy 2286.03 GB. As they are not needed, I have started to drop them. After dropping some 300 tables, I wanted to check the space released, but this query shows the occupied space has not been released. I always thought that if I drop a table with purge, the space would immediately be released back to the tablespace. What am I doing wrong?

select round(sum(bytes/1024/1024/1024),2) GB
from dba_segments
where tablespace_name='TOPREP_DAT' and owner='SAMSUNGLTE';

GB
---
2286.03
Categories: DBA Blogs

DBLINK connection interrupted, rollback does not happen

Tom Kyte - Mon, 2024-02-26 13:46
I successfully ran a job that executes a stored procedure containing truncate and insert statements over a DB link; the PL/SQL also contains a rollback statement in case an error occurs. During one run, the truncate completed, but then the DB link communication was interrupted and the job ended. So the data was truncated but neither inserted nor rolled back, and now there is no data in the table. If I run the SP again immediately, there is no connection problem on the DB link and the data is inserted. What could be the problem?

create or replace procedure sp_dataload
as
begin
  execute immediate 'TRUNCATE TABLE TABLE1';
  insert into table1 (id, name)
  select * from table2@dblink;
  commit;
  dbms_output.put_line('complete');
exception
  when others then
    dbms_output.put_line('error ' || sqlerrm);
    rollback;
end;
/
Categories: DBA Blogs

Object Dependency with RPC-Signature Dependency Mode

Tom Kyte - Mon, 2024-02-26 13:46
Dear AskTom team, I am happy that you are again available for questions :-) I was studying the 'Database Development Guide - 26 Understanding Schema Object Dependency' and focused on the topic '26.10.2 RPC-Signature Dependency Mode'. There it is written: 'Changing the data type of a parameter to another data type in the same class does not change the RPC signature, but changing the data type to a data type in another class does.' After studying this I tried it out on LiveSQL. Sadly, the dependent object always becomes invalid after I change the parameter of the referenced object to another data type in the same class (e.g. from 'number' to 'integer'); refer to my LiveSQL link. I tried to understand it but couldn't. What am I doing wrong here? Or did I misread the documentation? Thanks for your support! Greetings, Walter
Categories: DBA Blogs

UTL_FILE.FGETATTR can not find an existing file

Tom Kyte - Mon, 2024-02-26 13:46
I created a text file on the Oracle database server. The file is named 'TestFile' and is located at C:\TestFolder\TestFile.txt. The C drive, the 'TestFolder' folder and the 'TestFile.txt' file all have full-control permission for every OS user. I created a directory:

Create directory CheckFileExist as 'C:\TestFolder';

I granted read and write permissions on the CheckFileExist directory to the SYS Oracle user:

Grant read, write on directory CheckFileExist to SYS;

I wrote a block to check whether Oracle can find the 'TestFile.txt' file:

Declare
  V_File_Exists Boolean;
  V_File_Length Number;
  V_File_Size Number;
  Directory_Name Nvarchar2(255) := 'CheckFileExist';
Begin
  UTL_FILE.FGETATTR(Directory_Name, 'TestFile', V_File_Exists, V_File_Length, V_File_Size);
  If V_File_Exists Then
    DBMS_OUTPUT.PUT_LINE('File exists');
  Else
    DBMS_OUTPUT.PUT_LINE('File does not exist');
  End if;
End;

When I execute the block, the result is 'File does not exist'. What is the problem?
Categories: DBA Blogs

SQL Server: Manage large data ranges using partitioning

Yann Neuhaus - Mon, 2024-02-26 10:59
Introduction: 

When it comes to moving ranges of data with many rows across different tables in SQL Server, the partitioning functionality of SQL Server can provide a good solution for manageability and performance optimization. In this blog we will look at the advantages and the concept of partitioning in SQL Server.

Concept overview: 

In SQL Server, partitioning can divide the data of an index or table into smaller units. These units are called partitions. For that purpose, every row is assigned to a range, and every range in turn is assigned to a specific partition. Practically, there are two main components: the partition function and the partition scheme.

The partition function defines the range borders through boundary values, and thus, in combination with the data values, the number of partitions as well. You can define a partition function either as "range right" or "range left". The main difference is how the boundary value is treated. In a range right partition function, the boundary value is the first value of the next partition, while in a range left partition function the boundary value is the last value of the previous partition. For example:

We want to partition a table by year, and the column we want to apply the partition function to has the datatype "date". In total we have entries for the years 2023 and 2024, which means we want two partitions. In a range right function, the boundary value must be the first day of the year 2024, whereas in a range left function the boundary value must be the last day of the year 2023.

See example below: 

[Screenshots: partition function defined as range right vs. range left]
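As a minimal sketch (with hypothetical function names, not part of the demo below), the yearly split from the example could be expressed both ways like this:

-- RANGE RIGHT: the boundary value is the first value of the next partition
CREATE PARTITION FUNCTION pf_right_example (date)
AS RANGE RIGHT FOR VALUES ('2024-01-01');

-- RANGE LEFT: the boundary value is the last value of the previous partition
CREATE PARTITION FUNCTION pf_left_example (date)
AS RANGE LEFT FOR VALUES ('2023-12-31');

Both functions produce the same two yearly partitions; they differ only in which side of the split the boundary value itself belongs to.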

The partition scheme is used to map the different partitions, which are defined through the partition function, to multiple or one filegroup.

Main benefits of partitioning:

There are multiple scenarios where the performance or manageability of a data model can be increased through partitioning. The main advantage of partitioning is that it reduces the contention on the whole table as a database object and restricts it to the partition level when performing operations on the corresponding data range. Partitioning also facilitates data transfer with the "switch partition" statement, which performs a switch-in or switch-out of a whole partition. Through that, a large amount of data can be transferred very quickly.

Demo Lab:

For demo purposes I created the following script, which creates three tables with 5 million rows of historical data, ranging from 11 years in the past until today:

USE [master] 
GO 
 
--Create Test Database 
CREATE DATABASE [TestPartition] 
GO 
 
--Change Recovery Model 
ALTER DATABASE [TestPartition] SET RECOVERY SIMPLE WITH NO_WAIT 
GO 
 
--Create Tables 
Use [TestPartition] 
GO 
 
CREATE TABLE [dbo].[Table01_HEAP]( 
[Entry_Datetime] [datetime] NOT NULL, 
[Entry_Text] [nvarchar](50) NULL 
) 
GO 
 
CREATE TABLE [dbo].[Table01_CLUSTEREDINDEX]( 
[Entry_Datetime] [datetime] NOT NULL, 
[Entry_Text] [nvarchar](50) NULL 
) 
GO 
 
CREATE TABLE [dbo].[Table01_PARTITIONED]( 
[Entry_Datetime] [datetime] NOT NULL, 
[Entry_Text] [nvarchar](50) NULL 
) 
GO 
 
--GENERATE DATA 
 
declare @date as datetime 
 
declare @YearSubtract int 
declare @DaySubtract int 
declare @HourSubtract int 
declare @MinuteSubtract int  
declare @SecondSubtract int  
declare @MilliSubtract int 
 
--Specify how many years backwards data should be generated 
declare @YearsBackward int 
set @YearsBackward = 11 
 
--Specify how many rows of data should be generated 
declare @rows2generate int 
set @rows2generate = 5000000 
 
 
declare @counter int 
set @counter = 1 
 
--generate data entries 
while @counter <= @rows2generate  
begin 
 
--Year 
Set @YearSubtract = floor(rand() * (@YearsBackward - 0 + 1)) + 0 
--Day 
Set @DaySubtract = floor(rand() * (365 - 0 + 1)) + 0 
--Hour 
Set @HourSubtract = floor(rand() * (24 - 0 + 1)) + 0 
--Minute 
Set @MinuteSubtract = floor(rand() * (60 - 0 + 1)) + 0 
--Second 
Set @SecondSubtract = floor(rand() * (60 - 0 + 1)) + 0 
--Millisecond 
Set @MilliSubtract = floor(rand() * (1000 - 0 + 1)) + 0 
 
 
set @date = Dateadd(YEAR, -@YearSubtract , Getdate()) 
set @date = Dateadd(DAY, -@DaySubtract , @date) 
set @date = Dateadd(HOUR, -@HourSubtract , @date) 
set @date = Dateadd(MINUTE, -@MinuteSubtract , @date) 
set @date = Dateadd(SECOND, -@SecondSubtract , @date) 
set @date = Dateadd(MILLISECOND, @MilliSubtract , @date) 
 
insert into Table01_HEAP (Entry_Datetime, Entry_Text) 
Values (@date, 'This is an entry from ' + convert(nvarchar, @date, 29)) 
 
set @counter = @counter + 1 
 
end 
 
--COPY DATA TO OTHER TABLES 
 
INSERT INTO dbo.Table01_CLUSTEREDINDEX 
  (Entry_Datetime, Entry_Text) 
SELECT Entry_Datetime, Entry_Text 
  FROM Table01_HEAP 
 
INSERT INTO dbo.Table01_PARTITIONED 
  (Entry_Datetime, Entry_Text) 
SELECT Entry_Datetime, Entry_Text 
  FROM Table01_HEAP 
 
--Create Clustered Indexes for dbo.Table01_CLUSTEREDINDEX and dbo.Table01_PARTITIONED 
 
CREATE CLUSTERED INDEX [ClusteredIndex_Table01_CLUSTEREDINDEX] ON [dbo].[Table01_CLUSTEREDINDEX] 
( 
[Entry_Datetime] ASC 
 
) on [PRIMARY] 
GO 
 
CREATE CLUSTERED INDEX [ClusteredIndex_Table01_PARTITIONED] ON [dbo].[Table01_PARTITIONED] 
( 
[Entry_Datetime] ASC 
 
) on [PRIMARY] 
GO 

The tables contain the same data. The difference between them is that one is a heap, one has a clustered index, and one has a clustered index which will be partitioned in the next step:

[Screenshot: the generated data]

After the tables are created with the corresponding data and indexes, the partition function and scheme must be created. This was done by the following script:

-- Create Partition Function as range right for every Year -10 Years 
Create Partition Function [myPF01_datetime] (datetime) 
AS Range Right for Values ( 
DATEADD(yy, DATEDIFF(yy, 0, GETDATE()) + 0, 0), DATEADD(yy, DATEDIFF(yy, 0, GETDATE()) - 1, 0), DATEADD(yy, DATEDIFF(yy, 0, GETDATE()) - 2, 0), 
DATEADD(yy, DATEDIFF(yy, 0, GETDATE()) - 3, 0), DATEADD(yy, DATEDIFF(yy, 0, GETDATE()) - 4, 0), DATEADD(yy, DATEDIFF(yy, 0, GETDATE()) - 5, 0),  
DATEADD(yy, DATEDIFF(yy, 0, GETDATE()) - 6, 0), DATEADD(yy, DATEDIFF(yy, 0, GETDATE()) - 7, 0), DATEADD(yy, DATEDIFF(yy, 0, GETDATE()) - 8, 0),  
DATEADD(yy, DATEDIFF(yy, 0, GETDATE()) - 9, 0), DATEADD(yy, DATEDIFF(yy, 0, GETDATE()) - 10, 0) 
); 
GO 
 
-- Create Partition Scheme for Partition Function myPF01_datetime 
CREATE PARTITION SCHEME [myPS01_datetime] 
AS PARTITION myPF01_datetime ALL TO ([PRIMARY]) 
GO 

I have used the DATEADD() function in combination with the DATEDIFF() function to compute the first millisecond of each year as a datetime value, for each of the last 10 years, and used these as the range right boundary values. Of course it is also possible to hard-code the boundary values like '2014-01-01 00:00:00.000', but I prefer to keep it as dynamic as possible. In the end it is the same result:

[Screenshot: SELECT showing the DATEADD() expressions]

After creating the partition function, I created the partition scheme. The partition scheme is mapped to the partition function. In my case I assign every partition to the primary filegroup; it is also possible to split the partitions across multiple filegroups.

Once the partition function and scheme are created successfully, they can be applied to the existing table Table01_PARTITIONED. To achieve that, the clustered index of the table must be recreated on the partition scheme instead of the primary filegroup:

-- Apply partitiononing on Table: Table01_PARTITIONED through recreating the Tables Clustered Index ClusteredIndex_Table01_PARTITIONED on Partition Scheme myPS01_datetime 
CREATE CLUSTERED INDEX [ClusteredIndex_Table01_PARTITIONED] ON [dbo].[Table01_PARTITIONED] 
( 
[Entry_Datetime] ASC 
 
) with (DROP_EXISTING = ON) on myPS01_datetime(Entry_Datetime);  
GO 

After doing that, the table Table01_PARTITIONED has multiple partitions, while the other tables still have only one partition:

[Screenshots: partitions of the partitioned table vs. the clustered index table]

There are 12 partitions in total: one for entries with a datetime earlier than 2014-01-01 00:00:00.000, one for each year from 2014 through 2023, and one for entries with a datetime of 2024-01-01 00:00:00.000 or later. Partition no. 1 holds the earliest data and partition no. 12 the latest. See below:

[Screenshots: content of partition 1 and partition 12]

DEMO Tests:

First, I want to compare the performance of moving outdated data, older than 2014-01-01 00:00:00.000, from the table itself to a history table. For that purpose, I created a history table with the same data structure as the table Table01_CLUSTEREDINDEX:

Use [TestPartition] 
GO 
 
--Create History Table 
CREATE TABLE [dbo].[Table01_HISTORY01]( 
[Entry_Datetime] [datetime] NOT NULL, 
[Entry_Text] [nvarchar](50) NULL 
) 
GO 
 
--Create Clustered Indexes for dbo.Table01_HISTORY01 
CREATE CLUSTERED INDEX [ClusteredIndex_Table01_HISTORY01] ON [dbo].[Table01_HISTORY01] 
( 
[Entry_Datetime] ASC 
 
) on [PRIMARY] 
GO 

I start with the table with the clustered index, using a classic "insert into ... select" statement:

[Screenshot: INSERT INTO ... SELECT into the history table]
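The statement behind the screenshot was presumably along these lines (a sketch based on the tables defined above, not the exact statement shown):

INSERT INTO dbo.Table01_HISTORY01 (Entry_Datetime, Entry_Text)
SELECT Entry_Datetime, Entry_Text
FROM dbo.Table01_CLUSTEREDINDEX
WHERE Entry_Datetime < '2014-01-01T00:00:00.000';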

We can see that we have 10932 reads in total and a total query runtime of 761 milliseconds.

[Screenshot: execution plan of the INSERT ... SELECT]

In the execution plan, we can see that a classic index seek operation occurred: the database engine seeks every row with a datetime value earlier than 2014-01-01 00:00:00.000 and writes it into the history table.

For the delete operation we can see similar results:

[Screenshots: the DELETE statement and its statistics]
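The corresponding delete was presumably the mirror image of the insert (again a sketch, not the exact statement from the screenshot):

DELETE FROM dbo.Table01_CLUSTEREDINDEX
WHERE Entry_Datetime < '2014-01-01T00:00:00.000';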

In total, 785099 rows were moved, and the table Table01_CLUSTEREDINDEX no longer contains entries older than 2014-01-01 00:00:00.000:

[Screenshot: verifying the table content]

Next, let us compare the data movement when using a "switch partition" statement. For switching a partition from a partitioned source table to a nonpartitioned destination table, we need to use the partition number of the source table. For that I run the following query:

[Screenshot: the SWITCH PARTITION statement]
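The statement was presumably of this form (a sketch; note that the target of a partition switch must be an empty table with a matching structure on the same filegroup):

ALTER TABLE dbo.Table01_PARTITIONED
SWITCH PARTITION 1 TO dbo.Table01_HISTORY01;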

We can see that partition number 1 was moved within 2 milliseconds. Compared to the previous approach, where it took 761 milliseconds to insert the data and an additional 596 milliseconds to delete it, the switch partition operation is obviously much faster. But why is this the case? Because switching partitions is a metadata operation: it does not seek through an index (or, even worse, scan a table) and write every row one by one; instead it changes the metadata of the partition and remaps the partition to the target table.

And as we can see, we have the same result:

[Screenshot: verifying the table content]

Another big advantage shows when it comes to deleting a whole data range. For example, let us delete the entries of the year 2017; we do not need them anymore.

For the table with the clustered index, we must use a statement like this:

[Screenshot: the DELETE operation for the year 2017]
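Presumably something like this sketch:

DELETE FROM dbo.Table01_CLUSTEREDINDEX
WHERE Entry_Datetime >= '2017-01-01T00:00:00.000'
  AND Entry_Datetime <  '2018-01-01T00:00:00.000';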

We can see a query runtime of 355 milliseconds and 68351 page reads in total for the delete operation with the clustered index.

For the partitioned table, instead, we can use a truncate operation on the specific partition. That's because the partition is treated as its own physical unit and can therefore be truncated.

And as we should know: truncating is much faster, because this operation deallocates the pages and writes only one transaction log entry for the page deallocation, while a delete operation goes row by row and writes every row deletion into the transaction log.

So, let us try. The year 2017 is 7 years back, so let us verify that the right data range will be deleted:

[Screenshot: verifying the partition content]
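A verification query of this kind can be built on the $PARTITION function, which returns the partition number a given value maps to (a sketch using the partition function created above):

-- Which partition does a 2017 date fall into?
SELECT $PARTITION.myPF01_datetime('2017-06-15T12:00:00') AS partition_number;

-- Check the date range and row count of partition 5 before truncating it
SELECT MIN(Entry_Datetime) AS min_dt, MAX(Entry_Datetime) AS max_dt, COUNT(*) AS rows_in_partition
FROM dbo.Table01_PARTITIONED
WHERE $PARTITION.myPF01_datetime(Entry_Datetime) = 5;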

The short query above shows that 7 years back corresponds to partition no. 5, and the data range seems to be right. So, let us truncate:

[Screenshot: truncating the partition]
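Truncating a single partition uses the PARTITIONS clause (available since SQL Server 2016); presumably:

TRUNCATE TABLE dbo.Table01_PARTITIONED
WITH (PARTITIONS (5));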

And we can see that to truncate all the entries from the year 2017, the database engine took 1 millisecond, compared to the 355 milliseconds for the delete operation: again much faster.

Next: let’s see, how we can change the lock behavior of SQL-Server through partitioning. For that I ran the following update query for updating every entry text for dates which are younger than May 2018:

[Screenshot: the UPDATE statement]

While the update operation above was running, I queried the DMV sys.dm_tran_locks in another session to check the locks the update operation was holding:

[Screenshot: locks held by the update operation]
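A query of roughly this shape against sys.dm_tran_locks shows the locks per resource type (a sketch; the session id of the running UPDATE is hypothetical):

SELECT resource_type, request_mode, request_status, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = 55   -- session id of the running UPDATE
GROUP BY resource_type, request_mode, request_status
ORDER BY lock_count DESC;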

And we can see that we have a lot of page locks and also an exclusive lock on the object itself (in this case Table01_HEAP). That is because of SQL Server's lock escalation behavior.

I ran the same update operation on the partitioned table, but before that I changed the lock escalation setting of the table from the default value "table" to "auto". This is necessary to enable locking at the partition level:

[Screenshot: changing the lock escalation setting and running the update]
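The setting change itself is a one-liner; presumably the sequence looked roughly like this (a sketch):

-- Allow SQL Server to escalate to partition-level locks instead of a table lock
ALTER TABLE dbo.Table01_PARTITIONED SET (LOCK_ESCALATION = AUTO);

UPDATE dbo.Table01_PARTITIONED
SET Entry_Text = 'updated entry'
WHERE Entry_Datetime >= '2018-05-01T00:00:00.000';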

And when I’m querying the dmv again while the update operation above is running, I get the following result:

[Screenshot: locks held by the update on the partitioned table]

We can see that there is no exclusive lock at the object level anymore; we have an intent exclusive lock, which will not prevent other transactions from accessing the data (as long as they hold no conflicting lock at a more granular level). Instead, we have multiple exclusive locks on resources of type HOBT. And when we take the "resource_associated_entity_id" values and use them to query the sys.partitions table, we see the following information:

[Screenshot: the locked partitions in sys.partitions]

The resources locked by the update operation on the partitioned table are the partitions associated with the table. So SQL Server locked the partitions instead of the whole table. This has the advantage that locking happens in a more granular context, which prevents lock contention on the table itself.

Conclusion:

Partitioning can be a very powerful and useful functionality of SQL Server when used in an appropriate situation. Especially when it comes to regular operations on whole data ranges, partitioning can be used to enhance performance and manageability. With partitioning it is also possible to distribute the data of a table over multiple filegroups. Additionally, with splitting and merging partitions, it is possible to maintain partitions for growing or shrinking data.

L’article SQL Server: Manage large data ranges using partitioning est apparu en premier sur dbi Blog.

Kubernetes Networking by Using Cilium – Intermediate Level – Traditional Linux Routing

Yann Neuhaus - Mon, 2024-02-26 02:34

Welcome back to this blog post series about Kubernetes networking by using Cilium. In the previous post about network interfaces, we looked at how to identify all the interfaces involved in the routing between pods. I've also explained the routing in a Kubernetes cluster with Cilium in non-technical language in this blog post. Let's now see it in action for the techies!

Below is the drawing of where we left off:

We will continue to use the same method: you are the packet that travels from your apartment (pod) 10.10.2.117 on the top left to other pods in this Kubernetes cluster. But first, let's take this opportunity to talk about namespaces and enrich our drawing with a new analogy.

Routing between namespaces

A namespace is a logical group of objects that provides isolation in the cluster. However, by default all pods can communicate with each other in a "vanilla" Kubernetes cluster, whether they belong to the same namespace or not. So the isolation provided by a namespace doesn't mean the pods can't communicate. To allow or deny such communication, you need to create network policies. That could be the topic for another blog post!

We can use the analogy of a namespace being the same floor number in all buildings of our cluster: all apartments on the same floor in each building are logically grouped into the same namespace. This is what we can see below in our namespace called networking101:

$ kubectl get po -n networking101 -owide
NAME                        READY   STATUS    RESTARTS       AGE    IP            NODE                NOMINATED NODE   READINESS GATES
busybox-c8bbbbb84-fmhwc     1/1     Running   1 (125m ago)   4d1h   10.10.1.164   mycluster-worker2   <none>           <none>
busybox-c8bbbbb84-t6ggh     1/1     Running   1 (125m ago)   4d1h   10.10.2.117   mycluster-worker    <none>           <none>
netshoot-7d996d7884-fwt8z   1/1     Running   0              103m   10.10.2.121   mycluster-worker    <none>           <none>
netshoot-7d996d7884-gcxrm   1/1     Running   0              103m   10.10.1.155   mycluster-worker2   <none>           <none>

That’s our 4 apartments / pods on the same floor, grouped together in one namespace:

The routing process doesn’t care about the pod’s namespace, only its destination IP Address will be used. Let’s now see how we can go from the apartment 10.10.2.117 to the apartment 10.10.2.121 in the same building (node).

Pod to pod routing on the same node

From the pod 10.10.2.117, you've decided to pay a visit to 10.10.2.121. You first look at the routing table in order to know how to reach this destination. But you can't go out without the MAC address of your destination: you need both pieces of destination information (IP address and MAC address) before you can start to travel. You then look at the ARP table to find this information. The ARP table contains the known mappings of MAC addresses to IP addresses in your IP subnet. If the entry is not there, you first send a scout to knock at the door of each apartment in your community until he finds the MAC address of your destination. This is called an ARP request. When the scout comes back with that information, you write it into the ARP table. You thank the scout for his help and are now ready to start your travel by exiting the pod.

Let’s see how we can trace this in our source pod 10.10.2.117

$ kubectl exec -it -n networking101 busybox-c8bbbbb84-t6ggh -- ip route
default via 10.10.2.205 dev eth0
10.10.2.205 dev eth0 scope link

A very simple routing instruction! For every destination, you go through 10.10.2.205 using eth0, the only network interface in the pod. You can see from the drawing above that 10.10.2.205 is the IP address of the cilium_host. You then check your ARP table:

$ kubectl exec -it -n networking101 busybox-c8bbbbb84-t6ggh -- arp -a

The arp -a command lists the content of the ARP table, and we can see there is nothing in there.

A way to send a scout out is to use the arping tool toward our destination. You may have noticed that for my pods I'm using the busybox and netshoot images. Both provide networking tools that are useful for troubleshooting:

$ kubectl exec -it -n networking101 busybox-c8bbbbb84-t6ggh -- arping 10.10.2.121
ARPING 10.10.2.121 from 10.10.2.117 eth0
Unicast reply from 10.10.2.121 [d6:21:74:eb:67:6b] 0.028ms
Unicast reply from 10.10.2.121 [d6:21:74:eb:67:6b] 0.092ms
Unicast reply from 10.10.2.121 [d6:21:74:eb:67:6b] 0.123ms
^CSent 3 probe(s) (1 broadcast(s))
Received 3 response(s) (0 request(s), 0 broadcast(s))

We now have the missing piece of information: the MAC address of our destination. We can then check that it has been written into the ARP table of our source pod:

$ kubectl exec -it -n networking101 busybox-c8bbbbb84-t6ggh -- arp -a
? (10.10.2.205) at d6:21:74:eb:67:6b [ether]  on eth0

Here it is! However, you may wonder why we don't see the IP address of our destination, 10.10.2.121, here. In traditional networking that is what you would see, but here we are in a Kubernetes cluster, and Cilium is taking care of the networking in it. Also, we have seen above from the routing table of the source pod that we go to this cilium_host interface for every destination.

So the cilium_host on that node attracts all the traffic, even for communication between pods in the same IP subnet.

As a side note, below is a command that quickly displays all the IP addresses of the cilium_host and of the nodes in your cluster in one shot:

$ kubectl get ciliumnodes
NAME                      CILIUMINTERNALIP   INTERNALIP   AGE
mycluster-control-plane   10.10.0.54         172.18.0.3   122d
mycluster-worker          10.10.2.205        172.18.0.2   122d
mycluster-worker2         10.10.1.55        172.18.0.4   122d

In traditional networking, doing L2 switching, the destination MAC address is the one related to the destination IP address. That is not the case here in Kubernetes networking. So which interface has the MAC address d6:21:74:eb:67:6b? Let's answer that question immediately:

$ sudo docker exec -it mycluster-worker ip a | grep -iB1 d6:21:74:eb:67:6b
9: lxc4a891387ff1a@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d6:21:74:eb:67:6b brd ff:ff:ff:ff:ff:ff link-netns cni-67a5da05-a221-ade5-08dc-64808339ad05

That is an LXC interface on the node, and it is indeed our next step from the source pod toward our destination. You've learned from the first blog post of this networking series that a servant is waiting here at the LXC interface to direct us toward our destination.

From there, we don’t see much of the travel to the destination from the traditional Linux routing point of view. This is because the routing is done by the Cilium agent using eBPF. As the destination is in the same IP subnet as the source, the Cilium agent just switch it directly to the destination LXC interface and then reach the destination pod.

When the destination pod responds to the source, the same process occurs, and for the sake of completeness, let's look at the routing table and ARP table in the destination pod:

$ kubectl exec -it -n networking101 netshoot-7d996d7884-fwt8z -- ip route
default via 10.10.2.205 dev eth0 mtu 1450
10.10.2.205 dev eth0 scope link

$ kubectl exec -it -n networking101 netshoot-7d996d7884-fwt8z -- arp -a
? (10.10.2.205) at 92:65:df:09:dd:28 [ether]  on eth0

$ sudo docker exec -it mycluster-worker ip a | grep -iB1 92:65:df:09:dd:28
13: lxce84a702bb02c@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 92:65:df:09:dd:28 brd ff:ff:ff:ff:ff:ff link-netns cni-d259ef79-a81c-eba6-1255-6e46b8d1c779

So from the traditional Linux routing point of view, everything goes to the cilium_host, and the destination MAC address is that of the LXC interface of the node linked to our pod. This is exactly the same as we have seen with our source pod.

Pod to pod routing on a different node

Let’s now have a look at how we could reach the pod 10.10.1.155 from the source pod 10.10.2.117 which is hosted in another node. The routing is the same at the beginning but when talking to the servant at the LXC interface, he sees that the destination IP Address doesn’t belong to the same IP subnet and so directs us to the cilium_host in the Lobby. From there we are routed to the cilium_vxlan interface to reach the node that host our destination pod.

Let’s now have a look at the routing table of the host:

$ sudo docker exec -it mycluster-worker ip route
default via 172.18.0.1 dev eth0
10.10.0.0/24 via 10.10.2.205 dev cilium_host proto kernel src 10.10.2.205 mtu 1450
10.10.1.0/24 via 10.10.2.205 dev cilium_host proto kernel src 10.10.2.205 mtu 1450
10.10.2.0/24 via 10.10.2.205 dev cilium_host proto kernel src 10.10.2.205
10.10.2.205 dev cilium_host proto kernel scope link
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.2

We don’t see much here as the routing is using eBPF and is managed by the Cilium agent as we’ve seen before.

As a side note, and to share everything with you: the output of the network interfaces as well as of ip route in the Cilium agent pod is identical to that of the node. This is because at startup the Cilium agent provides this information to the node. You can check the Cilium agent with the following commands:

$ kubectl exec -it -n kube-system cilium-dprvh -- ip a
$ kubectl exec -it -n kube-system cilium-dprvh -- ip route

So you go through the VXLAN tunnel and you reach the node mycluster-worker2. Here is the routing table of this node:

$ sudo docker exec -it mycluster-worker2 ip route
default via 172.18.0.1 dev eth0
10.10.0.0/24 via 10.10.1.55 dev cilium_host proto kernel src 10.10.1.55 mtu 1450
10.10.1.0/24 via 10.10.1.55 dev cilium_host proto kernel src 10.10.1.55
10.10.1.55 dev cilium_host proto kernel scope link
10.10.2.0/24 via 10.10.1.55 dev cilium_host proto kernel src 10.10.1.55 mtu 1450
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.4

Again, from the traditional Linux routing point of view, there isn't much to see, except that all the traffic for the pod subnets goes to the cilium_host that is managed by the Cilium agent. This is identical to what we've learned on the other node. When we reach the cilium_vxlan interface, a servant is waiting for us with his magical eBPF map and directs us through a secret passage to the LXC corridor interface of the top-left pod, where we can reach our destination.

Wrap up

We’ve explored all that can be seen in routing from the traditional Linux point of view by using the common networking tools.

Maybe you feel frustrated not to understand it completely, because there are some gaps in this step-by-step packet routing? Cilium uses eBPF for routing the packets, and this adds some complexity to understanding the routing. However, it is much faster than traditional Linux routing thanks to the secret passages opened by the eBPF servants.

If you want to know more about this, don’t miss my next blog post where I’ll dive deep into the meanders of eBPF routing. See you there!

L’article Kubernetes Networking by Using Cilium – Intermediate Level – Traditional Linux Routing est apparu en premier sur dbi Blog.

LLM Agents with Sparrow

Andrejus Baranovski - Mon, 2024-02-26 01:53
I explain the new functionality in Sparrow: LLM agents support. This means you can implement independently running agents and invoke them from the CLI or API. This makes it easier to run various LLM-related processing within Sparrow.

 

Predictive Analytics and AI/ML

Dylan's BI Notes - Sun, 2024-02-25 16:24
We have heard about advanced analytics, which is often described by dividing analytics into three main types. While many articles explore these concepts, such as Prescriptive vs. Predictive Analytics: Examples & Use Cases, I aim to approach these categories from the standpoint of a software vendor and describe how they relate to machine learning practices. The […]
Categories: BI & Warehousing

Physical Online Migration to ExaCC with Oracle Zero Downtime Migration (ZDM)

Yann Neuhaus - Sun, 2024-02-25 14:06

A while ago I tested and blogged about ZDM; see my previous articles. I finally had the chance to implement it at one of our customers, to migrate on-premises databases to Exadata Cloud@Customer. After implementing a Logical Offline migration with ZDM (see my previous article, https://www.dbi-services.com/blog/logical-offline-migration-to-exacc-with-oracle-zero-downtime-migration-zdm/), I had the opportunity to implement a Physical Online migration and to test it with one database taken as a pilot. In this article I would like to share with you the experience I gained from migrating an on-premises database to ExaCC using a ZDM Physical Online migration. This method uses Data Guard, so we can only use it to migrate Oracle Enterprise Edition databases. We call it Physical Online because ZDM creates a standby database, either from a backup or with direct data transfer (active duplication with RMAN or restore from service), and synchronises it with the primary. During all the preparation the database is still available to the application, and the maintenance window is shorter, with less downtime: it is only needed for the switchover operation. Of course, ZDM can include a non-CDB to PDB conversion, which makes it a little bit longer. Physical Online is the only ZDM method that includes a fallback. We intended to use this method at the customer as the mandatory one for large Oracle EE databases, and the preferred one for small Oracle EE databases.

Introduction

The on-premises databases are single-tenant (non-CDB) databases, all running version 19.10.

The target databases are Oracle RAC databases running on ExaCC with Oracle version 19.21.

The Oracle Net port used on the on-premise site is 13000 and the Oracle Net port used on the ExaCC is 1521.

We will use ZDM to migrate the on-premises single-tenant database to a PDB within a CDB. ZDM will be in charge of migrating the database to the ExaCC using Data Guard, running datapatch, converting the non-CDB database to a PDB within a target CDB, and upgrading the time zone. The creation of the standby database will be done through a direct connection, without any backup.

Of course I have anonymised all outputs to remove customer infrastructure names, so let's use the following convention:

ExaCC Cluster 01 node 01 : ExaCC-cl01n1
ExaCC Cluster 01 node 02 : ExaCC-cl01n2
On premises Source Host : vmonpr
Target db_unique_name on the ExaCC : ONPR_RZ2
Database Name to migrate : ONPR
ZDM Host : zdmhost
ZDM user : zdmuser
Domain : domain.com
ExaCC PDB to migrate to : ONPRZ_APP_001T

We will then migrate the on-premises single-tenant database, named ONPR, to a PDB on the ExaCC. The PDB will be named ONPRZ_APP_001T.

Ports

It is important to mention that the following ports are needed:

Source              Destination                        Port
ZDM Host            On-premises Host                   22
ZDM Host            ExaCC VM (both nodes)              22
On-premises Host    ExaCC VM (scan listener and VIP)   Oracle Net (1521)
ExaCC               On-premises Host                   Oracle Net (13000)

If the Oracle Net ports are not opened from the ExaCC to the on-premises host, for example, the migration evaluation immediately stops at one of the first steps, named ZDM_PRECHECKS_TGT, and the following errors appear in the log file:

PRGZ-1132 : -eval failed for the phase ZDM_PRECHECKS_TGT with exception
PRGZ-3176 : a database connection cannot be established from target node ExaCC-cl01n1 to source node vmonpr
PRCC-1021 : One or more of the submitted commands did not execute successfully.
PRCZ-2103 : Failed to execute command "/u02/app/oracle/product/19.0.0.0/dbhome_2/bin/tnsping" on node "ExaCC-cl01n1" as user "root". Detailed error:
TNS Ping Utility for Linux: Version 19.0.0.0.0 - Production on 06-FEB-2024 10:06:37

Copyright (c) 1997, 2023, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=tcp)(HOST=X.X.X.15)(PORT=13000)))
TNS-12535: TNS:operation timed out
PRCC-1025 : Command submitted on node ExaCC-cl01n2 timed out after 60 seconds..

If the Oracle Net ports are not opened from the on-premises host to the ExaCC, for example, the migration evaluation immediately stops at another early step, named ZDM_PRECHECKS_SRC, and the following errors appear in the log file:

PRGZ-1132 : -eval failed for the phase ZDM_PRECHECKS_SRC with exception
PRGZ-3130 : failed to establish connection to target listener from nodes [vmonpr]
PRCC-1021 : One or more of the submitted commands did not execute successfully.
PRCC-1025 : Command submitted on node vmonpr timed out after 15 seconds...

Requirements… To be known before starting

There are a few requirements to be aware of.

ZDM Host
  • SSH connection allowed between Source and Target host
  • SSH authentication key pairs without passphrase should be established and tested for the user between ZDM host and both source and target database
Source Database
  • Both source and target need to be on the same release version.
  • Transparent Data Encryption (TDE) wallet must be configured (even if source database is not encrypted)
  • WALLET_TYPE should be set to either AUTOLOGIN or PASSWORD
  • Wallet STATUS should be OPEN
  • Wallet should be opened on all PDB in case the source is a container Database
  • The master key must be set for all the PDB and the container database
  • In case the source database is a RAC database, SNAPSHOT CONTROLFILE must be configured on a shared location on all cluster nodes
  • SCAN listener/listener connections allowed on both source and target DB
  • DB_UNIQUE_NAME parameter must be different than target database
  • SYSPASSWORD must be the same on the source and target database
Target Database
  • Database must be created prior the migration
  • Database release version should match source version.
  • The target database patch level should also be the same or higher than the source database. In case the target database patch level is higher, ZDM can be configured to run datapatch on the target database. The target database patch level cannot be lower than that of the source database.
  • For Oracle RAC databases, SSH connectivity between nodes for the oracle user should be setup
  • Storage size should be sufficient (same as source database)
  • DB_NAME parameter must be the same than the source database
  • DB_UNIQUE_NAME parameter must be different than the one on the source database
  • Automatic backups should be disabled (in the ExaC@C "configure backups" section, the backup destination option "none" should be selected)
  • TDE should be activated
  • Wallet should be open and WALLET_TYPE should be set to either AUTOLOGIN or PASSWORD
  • SYSPASSWORD must be the same on the source and target database
Others
  • Ensure that all required ports have been opened (a quick pre-check sketch follows this list).
  • Oracle Net services should be configured and tested on both source and target database for Data Guard synchronisation and deployment of the standby database with active duplication
  • We will need to go through a temporary multitenant database on the ExaCC which has the same DB_NAME as the source and a different DB_UNIQUE_NAME. This CDB will host the final PDB.
  • The final PDB can then be relocated to the appropriate final CDB on the ExaCC later on.
  • ZDM will create its own temporary database, with the DB_NAME of the source database and the final PDB name as DB_UNIQUE_NAME, to build the Data Guard configuration, and will remove it during the cleanup phase
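
Before starting ZDM, the required connectivity can be sanity-checked by hand. A minimal sketch, assuming the source listener port 13000 used in this article and the default 1521 on the target; ExaCC-scan1 is a placeholder for your SCAN address:

# From the ZDM host: passwordless SSH to the source (oracle) and target (opc) hosts
ssh -o BatchMode=yes oracle@vmonpr hostname
ssh -o BatchMode=yes opc@ExaCC-cl01n1 hostname

# From the target node: Oracle Net connectivity to the source listener (port 13000)
tnsping vmonpr:13000

# From the source node: Oracle Net connectivity to the target (default port 1521)
tnsping ExaCC-scan1:1521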
Prepare ZDM Physical Online Response file

We will prepare the ZDM response file that will be used by copying the template provided by ZDM:

[zdmuser@zdmhost migration]$ cp -p /u01/app/oracle/product/zdm/rhp/zdm/template/zdm_template.rsp ./zdm_ONPR_physical_online.rsp

The main parameters to take care of are:

Parameter: Explanation

TGT_DB_UNIQUE_NAME: Target database DB_UNIQUE_NAME. For Cloud types Exadata Cloud at Customer (ExaCC) Gen2 and Exadata Cloud Service (ExaCS):
  • db_name – the target database db_name should be the same as the source database db_name
  • db_unique_name – the target database db_unique_name parameter value must be unique, to ensure that Oracle Data Guard can identify the target as a different database from the source database

MIGRATION_METHOD: Specifies whether the migration uses Oracle Data Guard (online) or backup and restore (offline). We are using an online migration here, so the parameter needs to be set to ONLINE_PHYSICAL.

DATA_TRANSFER_MEDIUM: Specifies the medium used to create the standby database, either through a backup (using NFS or ZDLRA for example) or a direct connection where the standby is instantiated directly from the source over SQL*Net connectivity (duplicate from active or restore from service). Choose DIRECT as we are doing a physical direct data transfer.

PLATFORM_TYPE: Target platform type. EXACC in our case.

SRC_PDB_NAME: Source database PDB name. Not needed here as all our on-premises databases are single tenant.

SRC_DB_LISTENER_PORT: To be used when there is a standalone database (no Grid Infrastructure) configured with a non-default listener port other than 1521. 13000 in our case.

NONCDBTOPDB_CONVERSION: Specifies whether to convert a non-CDB source to a PDB. TRUE as we want to convert our on-premises database to a PDB during the ZDM migration.

NONCDBTOPDB_SWITCHOVER: For a physical migration using Data Guard switchover, indicates whether the switchover operations will be executed during a migration job with non-CDB to PDB conversion enabled. Default is TRUE, kept as TRUE in our case.

SKIP_FALLBACK: If set to FALSE, the redo logs will be shipped from the new primary (ExaCC) to the standby (on-premise database) once the switchover is completed, in case a fallback is needed. FALSE as we want a fallback.

TGT_RETAIN_DB_UNIQUE_NAME: Adds a new phase ZDM_RETAIN_DBUNIQUENAME_TGT (can also be ZDM_MODIFY_DBUNIQUENAME_TGT). Pause before this phase to keep the ZDM temporary database after the PDB conversion, in case a fallback is needed. Resume the job once all is ok and the fallback is not needed any more.

TGT_SKIP_DATAPATCH: If set to FALSE, ZDM will run datapatch on the target database as part of the post-migration tasks. Useful in case the target patch level is higher than the source's. FALSE in our case, as the target version is higher than the source and we want ZDM to run datapatch.

SHUTDOWN_SRC: Specifies whether to shut down the source database after the migration completes. FALSE in our case.

SRC_RMAN_CHANNELS: Number of RMAN channels on the source.

TGT_RMAN_CHANNELS: Number of RMAN channels on the target.

ZDM_SKIP_DG_CONFIG_CLEANUP: If FALSE, ZDM will deconfigure the Data Guard parameters set up for the migration on the source and target database at the end of the migration.

ZDM_RMAN_DIRECT_METHOD: RMAN method to use for ONLINE_PHYSICAL direct data transfer, either RMAN active duplicate or restore from service. We kept the default, restore from service.

ZDM_USE_DG_BROKER: If TRUE, ZDM will use Data Guard Broker for managing the Data Guard configuration. TRUE in our case.

ZDM_NONCDBTOPDB_PDB_NAME: When migrating a non-CDB source to a CDB target as a PDB, the PDB name to be used.

ZDM_TGT_UPGRADE_TIMEZONE: Upgrade the target database time zone. This will require downtime for the database. TRUE in our case.

ZDM_APPLY_LAG_MONITORING_INTERVAL: Apply lag monitoring interval to verify that both source and target are ready for switchover. Keep NONE.

Note that there is no parameter for the listener port on the target (ExaCC), so we assume it is hard-coded to the default port 1521.

Also note that as we configured SHUTDOWN_SRC as FALSE, additional steps will be required to ensure that applications no longer use the source (on-premise) database.

Updated ZDM response file compared to ZDM template for the migration we are going to run:

[zdmuser@zdmhost migration]$ diff zdm_ONPR_physical_online.rsp /u01/app/oracle/product/zdm/rhp/zdm/template/zdm_template.rsp
24c24
< TGT_DB_UNIQUE_NAME=ONPR_RZ2
---
> TGT_DB_UNIQUE_NAME=
32c32
< MIGRATION_METHOD=ONLINE_PHYSICAL
---
> MIGRATION_METHOD=
63c63
< DATA_TRANSFER_MEDIUM=DIRECT
---
> DATA_TRANSFER_MEDIUM=
75c75
< PLATFORM_TYPE=EXACC
---
> PLATFORM_TYPE=
119c119
< SRC_DB_LISTENER_PORT=13000
---
> SRC_DB_LISTENER_PORT=
230c230
< NONCDBTOPDB_CONVERSION=TRUE
---
> NONCDBTOPDB_CONVERSION=FALSE
252c252
< SKIP_FALLBACK=FALSE
---
> SKIP_FALLBACK=
268c268
< TGT_RETAIN_DB_UNIQUE_NAME=TRUE
---
> TGT_RETAIN_DB_UNIQUE_NAME=
312c312
< SHUTDOWN_SRC=FALSE
---
> SHUTDOWN_SRC=
333c333
< SRC_RMAN_CHANNELS=3
---
> SRC_RMAN_CHANNELS=
340c340
< TGT_RMAN_CHANNELS=6
---
> TGT_RMAN_CHANNELS=
526c526
< ZDM_USE_DG_BROKER=TRUE
---
> ZDM_USE_DG_BROKER=
574c574
< ZDM_NONCDBTOPDB_PDB_NAME=ONPRZ_APP_001T
---
> ZDM_NONCDBTOPDB_PDB_NAME=
595c595
< ZDM_TGT_UPGRADE_TIMEZONE=TRUE
---
> ZDM_TGT_UPGRADE_TIMEZONE=FALSE

ZDM Build Version

We are using ZDM build version 21.4:

[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli -build
version: 21.0.0.0.0
full version: 21.4.0.0.0
patch version: 21.4.1.0.0
label date: 221207.25
ZDM kit build date: Jul 31 2023 14:24:25 UTC
CPAT build version: 23.7.0

Passwordless Login

Passwordless Login needs to be configured between the ZDM host, the source host and the target host. See my previous blog: https://www.dbi-services.com/blog/oracle-zdm-migration-java-security-invalidkeyexception-invalid-key-format/
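
If it is not yet in place, here is a minimal setup sketch for zdmuser on the ZDM host; the -m PEM flag is an assumption tied to the key format issue discussed in the linked post, and the host names are the ones used in this article:

[zdmuser@zdmhost ~]$ ssh-keygen -t rsa -m PEM -N "" -f ~/.ssh/id_rsa    # RSA key pair without passphrase
[zdmuser@zdmhost ~]$ ssh-copy-id oracle@vmonpr                          # source host, oracle user
[zdmuser@zdmhost ~]$ ssh-copy-id opc@ExaCC-cl01n1                       # target host, opc user
[zdmuser@zdmhost ~]$ ssh -o BatchMode=yes oracle@vmonpr date            # must succeed without any prompt
[zdmuser@zdmhost ~]$ ssh -o BatchMode=yes opc@ExaCC-cl01n1 date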

If Passwordless Login is not configured for one of the nodes, you will see an error like the following in the log file during the migration evaluation:

PRCZ-2006 : Unable to establish SSH connection to node "ExaCC-cl01n2" to execute command "/u02/app/oracle/product/19.0.0.0/dbhome_2/bin/tnsping vmonpr:13000"
Creation of the target database

As explained in the requirements, we must create a target CDB on the ExaCC with the same DB_NAME as the source database to be migrated but a different DB_UNIQUE_NAME. In our case it will be ONPR for the DB_NAME and ONPR_RZ2 for the DB_UNIQUE_NAME. This database must exist before the migration is started with ZDM. ZDM will create another temporary database, named after the final PDB, and will use this target CDB as a template.

TDE (Transparent Data Encryption) configuration

The source database doesn’t need to be encrypted; the target database will be encrypted in any case. ZDM supports the migration of both encrypted and non-encrypted source databases, and the target database encryption is taken into account during the migration process. Even if the source database is not encrypted, a TDE wallet still needs to be configured prior to the migration, as ZDM will use it to encrypt data on the target.

We need to note that downtime is required to restart the database when the WALLET_ROOT parameter needs to be configured.

Also, until the migration is completed, it is strongly recommended to include the wallet in the backup strategy.
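
A trivial ad-hoc sketch of such a wallet backup, using the WALLET_ROOT location configured in the next step (/backup is a placeholder destination, adapt it to your backup setup):

oracle@vmonpr:/home/oracle/ [ONPR] tar -czf /backup/ONPR_wallet_$(date +%Y%m%d_%H%M).tar.gz -C /u00/app/oracle/admin/ONPR wallet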

Configure instance parameter

Check that the WALLET directory exists, otherwise create it:

SQL> !ls /u00/app/oracle/admin/ONPR/wallet
ls: cannot access /u00/app/oracle/admin/ONPR/wallet: No such file or directory

SQL> !mkdir /u00/app/oracle/admin/ONPR/wallet

Configure instance parameter for the database wallet and restart the database:

SQL> alter system set WALLET_ROOT='/u00/app/oracle/admin/ONPR/wallet' scope=spfile;

SQL> shutdown immediate

SQL> startup

Check the wallet. No WRL_PARAMETER should be displayed. WALLET_TYPE should be unknown and STATUS not_available.

SQL> select WRL_PARAMETER, WRL_TYPE,WALLET_TYPE, status from V$ENCRYPTION_WALLET;

WRL_PARAMETER                  WRL_TYPE             WALLET_TYPE          STATUS
------------------------------ -------------------- -------------------- ---------------------------
                               FILE                 UNKNOWN              NOT_AVAILABLE

Configure TDE:

SQL> alter system set tde_configuration='keystore_configuration=FILE' scope=both;

System altered.

Check the Wallet. WRL_PARAMETER should be displayed with the wallet location. WALLET_TYPE should still be unknown and STATUS not_available.

SQL> select WRL_PARAMETER, WRL_TYPE,WALLET_TYPE, status from V$ENCRYPTION_WALLET;

WRL_PARAMETER                            WRL_TYPE             WALLET_TYPE          STATUS
---------------------------------------- -------------------- -------------------- -----------------
/u00/app/oracle/admin/ONPR/wallet/tde/   FILE                 UNKNOWN              NOT_AVAILABLE

Create keystore

Create the keystore using the appropriate ExaCC password. We recommend using the same one for source and target, although they can differ.

SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE IDENTIFIED BY "*********************";

keystore altered.

We can now see that we have a wallet file in the TDE wallet directory:

SQL> !ls -ltrh /u00/app/oracle/admin/ONPR/wallet/tde/
total 4.0K
-rw-------. 1 oracle dba 2.5K Feb  9 16:11 ewallet.p12

And if we check the wallet status, we can see that WALLET_TYPE is still UNKNOWN, but STATUS is now set to CLOSED.

SQL> select WRL_PARAMETER, WRL_TYPE,WALLET_TYPE, status from V$ENCRYPTION_WALLET;

WRL_PARAMETER                            WRL_TYPE             WALLET_TYPE          STATUS
---------------------------------------- -------------------- -------------------- ------------------------------
/u00/app/oracle/admin/ONPR/wallet/tde/   FILE                 UNKNOWN              CLOSED

Open the keystore

The keystore can now be opened.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "***************";

keystore altered.

And the wallet type is now set to PASSWORD and status is OPEN_NO_MASTER_KEY.

SQL> select WRL_PARAMETER, WRL_TYPE,WALLET_TYPE, status from V$ENCRYPTION_WALLET;

WRL_PARAMETER                            WRL_TYPE             WALLET_TYPE          STATUS
---------------------------------------- -------------------- -------------------- ------------------------------
/u00/app/oracle/admin/ONPR/wallet/tde/   FILE                 PASSWORD             OPEN_NO_MASTER_KEY

Create and activate the master encryption key

Using the same password, we can now create and activate the master encryption key with the WITH BACKUP option. This sets the database master encryption key in the wallet.

SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "*************" with backup;

keystore altered.

If you are running version 19.10, as we are here, you will face the following error:

ORA-28374: typed master key not found in wallet

This is related to following bug:

Bug 31500699 – ORA-28374: typed master key not found in wallet after tablespace TDE Enabled (Doc ID 31500699.8)

This is not an issue and we can move forward, as the master encryption key has anyhow been created and added to the wallet. The only limitation is that it will be impossible to encrypt any data. We do not care, as we are not encrypting the source on-premise database, and we should not, as we are not licensed for Oracle Advanced Security.

We now have a new wallet and a backup one:

SQL> !ls -ltrh /u00/app/oracle/admin/ONPR/wallet/tde/
total 8.0K
-rw-------. 1 oracle dba 2.5K Feb  9 16:16 ewallet_2024020915161059.p12
-rw-------. 1 oracle dba 4.0K Feb  9 16:16 ewallet.p12

Set autologin Wallet

We will change the wallet type from password to autologin, using the same password, so that the wallet is opened automatically.

SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/u00/app/oracle/admin/ONPR/wallet/tde/' IDENTIFIED BY "************";

keystore altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY "******************";

keystore altered.

And we can check that all has been configured appropriately for the wallet:

SQL> select WRL_PARAMETER, WRL_TYPE,WALLET_TYPE, status from V$ENCRYPTION_WALLET;

WRL_PARAMETER                            WRL_TYPE             WALLET_TYPE          STATUS
---------------------------------------- -------------------- -------------------- ------------------------------
/u00/app/oracle/admin/ONPR/wallet/tde/   FILE                 AUTOLOGIN            OPEN

And we can see that we now have an autologin cwallet.sso file:

SQL> !ls -ltrh /u00/app/oracle/admin/ONPR/wallet/tde/
total 14K
-rw-------. 1 oracle dba 4.0K Feb  9 17:07 ewallet.p12
-rw-------. 1 oracle dba 5.7K Feb  9 17:07 cwallet.sso
-rw-------. 1 oracle dba 2.5K Feb  9 16:16 ewallet_2024020915161059.p12

Change tablespace_encryption instance parameter to decrypt_only

This is mandatory so that no new tablespace created on the source database gets encrypted once TDE is configured; otherwise the Oracle Advanced Security license would be activated. Also, after the ZDM switchover steps are completed, the primary database will be running on the ExaCC, where its tablespaces will be encrypted. The redo logs generated there, which need to be applied on the standby database running on-premise, will be encrypted as well. As we are not licensed for Oracle Advanced Security on-premise, we also need this parameter set to decrypt_only to be able to decrypt the redo before applying it on the source database.

Unfortunately this parameter was only introduced with Oracle 19.16. So if you are running an older version on the source on-premise database, as we are, you cannot use this parameter.

This means that we will need to:

  • Deactivate the fallback possibility. In our situation we will only be able to use ZDM to migrate the database, without any fallback possibility. A bad situation…
  • Ensure that the parameter ENCRYPT_NEW_TABLESPACES is set to DDL and that no ENCRYPTION clause is specified in the statement for any newly created tablespace
SQL> show parameter encry

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
encrypt_new_tablespaces              string      CLOUD_ONLY
tablespace_encryption                string      MANUAL_ENABLE

SQL> alter system set tablespace_encryption='decrypt_only' scope=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 3.7581E+10 bytes
Fixed Size                 23061704 bytes
Variable Size            5100273664 bytes
Database Buffers         3.2346E+10 bytes
Redo Buffers              111153152 bytes
Database mounted.
Database opened.

SQL> show parameter encry

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
encrypt_new_tablespaces              string      DDL
tablespace_encryption                string      DECRYPT_ONLY

Update SYS user password on the on-premise database

Both the source and target SYS passwords must match.

Update the source one to the same value you are using on the ExaCC:

SQL> alter user sys identified by "********";

Update source listener.ora with static entry

If the on-premise source database is not running on Oracle Restart (Grid Infrastructure), you will have to add a static entry for the DGMGRL service to the appropriate listener. Unfortunately ZDM does not do this for you.

SID_LIST_<listener_name> =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = <dbname>_DGMGRL.<domain>)
      (ORACLE_HOME = <ORACLE_HOME>)
      (SID_NAME = <SID>)
    )
  )
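
Filled in for our ONPR source it might look as follows; the listener name LISTENER_ONPR and the ORACLE_HOME path are assumptions for this environment, while the service name matches the one visible in the error below:

SID_LIST_LISTENER_ONPR =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = ONPR_DGMGRL.domain.com)
      (ORACLE_HOME = /u00/app/oracle/product/19.0.0.0/dbhome_1)
      (SID_NAME = ONPR)
    )
  )

Reload the listener afterwards (for example with lsnrctl reload LISTENER_ONPR) so that the static service gets registered.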

If you do not do so, the ZDM switchover step, ZDM_SWITCHOVER_SRC, will fail when you resume the migration, with the following error:

PRGZ-3605 : Oracle Data Guard Broker switchover to database "ONPRZ_APP_001T" on database "ONPR" failed.
ONPRZ_APP_001T
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Thu Feb 22 09:46:57 2024
Version 19.10.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "ONPR"
Connected as SYSDG.
DGMGRL> Performing switchover NOW, please wait...
Operation requires a connection to database "onprz_app_001t"
Connecting ...
Connected to "ONPRZ_APP_001T"
Connected as SYSDBA.
New primary database "onprz_app_001t" is opening...
Operation requires start up of instance "ONPR" on database "onpr"
Starting instance "ONPR"...
Unable to connect to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=vmONPR.domain.com)(Port=13000))(CONNECT_DATA=(SERVICE_NAME=ONPR_DGMGRL.domain.com)(INSTANCE_NAME=ONPR)(SERVER=DEDICATED)))
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Archived log backup on the source

You will have to ensure that the source database archived log deletion policy is set appropriately, so that no archived log is removed before it has been applied on the standby. This ensures no source archived log needed by Data Guard is missing.
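
A hedged RMAN sketch of such a policy on the source; this is the standard Data Guard-aware deletion policy, to be adapted to your existing backup strategy:

oracle@vmonpr:/home/oracle/ [ONPR] rman target /
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;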

Convert target database to single instance

I have converted the target database on the ExaCC used during the ZDM migration (the one taken as a template by ZDM and where the final PDB will be hosted) from RAC to single instance, for 2 reasons:

  • First, as we will see later, ZDM creates the standby database as a new instance, using the final PDB name as ORACLE_SID, and this temporary database is anyhow a single instance.
  • Second, if the target database is RAC, ZDM will create a second UNDO tablespace in the single instance source database. I do not want to make any change to the source database. Also, as I'm running version 19.10 on the source, the UNDO tablespace would be encrypted, and moreover I would face bug 31500699, making the ZDM migration fail.
Update cluster_database instance parameter
oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] sqh

SQL> show parameter cluster_database

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     TRUE
cluster_database_instances           integer     2

SQL> set line 300
SQL> col name for a30
SQL> col value for a30
SQL> select inst_id, name, value from gv$parameter where lower(name)='cluster_database';

   INST_ID NAME                           VALUE
---------- ------------------------------ ------------------------------
         2 cluster_database               TRUE
         1 cluster_database               TRUE

SQL> alter system set cluster_database=FALSE scope=spfile sid='*';

System altered.

Stop cluster database
oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl status database -d ONPR_RZ2
Instance ONPR1 is running on node ExaCC-cl01n1
Instance ONPR2 is running on node ExaCC-cl01n2

oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl stop database -d ONPR_RZ2

oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl status database -d ONPR_RZ2
Instance ONPR1 is not running on node ExaCC-cl01n1
Instance ONPR2 is not running on node ExaCC-cl01n2

Change grid infrastructure configuration

We will remove the second instance.

oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl config database -d ONPR_RZ2
Database unique name: ONPR_RZ2
Database name: ONPR
Oracle home: /u02/app/oracle/product/19.0.0.0/dbhome_2
Oracle user: oracle
Spfile: +DATAC1/ONPR_RZ2/PARAMETERFILE/spfile.634.1160214211
Password file: +DATAC1/ONPR_RZ2/PASSWORD/pwdonpr_rz2.562.1160213439
Domain: domain.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATAC1
Mount point paths: /acfs01
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: racoper
Database instances: ONPR1,ONPR2
Configured nodes: ExaCC-cl01n1,ExaCC-cl01n2
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed
oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)]

oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl remove instance -d ONPR_RZ2 -i ONPR2
Remove instance from the database ONPR_RZ2? (y/[n]) y
oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)]

oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl config database -d ONPR_RZ2
Database unique name: ONPR_RZ2
Database name: ONPR
Oracle home: /u02/app/oracle/product/19.0.0.0/dbhome_2
Oracle user: oracle
Spfile: +DATAC1/ONPR_RZ2/PARAMETERFILE/spfile.634.1160214211
Password file: +DATAC1/ONPR_RZ2/PASSWORD/pwdonpr_rz2.562.1160213439
Domain: domain.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATAC1
Mount point paths: /acfs01
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: racoper
Database instances: ONPR1
Configured nodes: ExaCC-cl01n1
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed

Start target database ONPR on ExaCC
oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl start database -d ONPR_RZ2

oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] srvctl status database -d ONPR_RZ2
Instance ONPR1 is running on node ExaCC-cl01n1

As we can see, only one instance is running. This can also be double-checked with the instance parameter.

oracle@ExaCC-cl01n1:~/ [ONPR1 (CDB$ROOT)] sqh

SQL> set lines 300 pages 500
SQL> col name for a30
SQL> col value for a30
SQL> select inst_id, name, value from gv$parameter where lower(name)='cluster_database';

   INST_ID NAME                           VALUE
---------- ------------------------------ ------------------------------
         1 cluster_database               FALSE

Evaluating ZDM Migration

We are now all set to evaluate the ZDM migration. We will first run zdmcli with the -eval option to evaluate the migration and check that everything is ok.

We need to provide some arguments:

Argument: Value

-sourcesid: Database name of the source database, when the source database is a single instance deployed on a non Grid Infrastructure environment
-rsp: ZDM response file
-sourcenode: Source host
-srcauth (with the 3 sub-arguments -srcarg1, -srcarg2, -srcarg3): Name of the source authentication plug-in, with its 3 sub-arguments:
  1st argument: user, should be oracle
  2nd argument: ZDM private RSA key
  3rd argument: sudo location
-targetnode: Target host
-tgtauth (with the 3 sub-arguments -tgtarg1, -tgtarg2, -tgtarg3): Name of the target authentication plug-in, with its 3 sub-arguments:
  1st argument: user, should be opc
  2nd argument: ZDM private RSA key
  3rd argument: sudo location
-tdekeystorepasswd: Source database TDE keystore password
-tgttdekeystorepasswd: Target container database TDE keystore password

All evaluation steps completed successfully:

[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli migrate database -sourcesid ONPR -rsp /home/zdmuser/migration/zdm_ONPR_physical_online.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa -srcarg3 sudo_location:/usr/bin/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -tdekeystorepasswd -tgttdekeystorepasswd -eval
zdmhost.domain.com: Audit ID: 428
Enter source database ONPR SYS password:
Enter source database ONPR TDE keystore password:
Enter target container database TDE keystore password:
zdmhost: 2024-02-14T13:18:19.773Z : Processing response file ...
Operation "zdmcli migrate database" scheduled with the job ID "39".

[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli query job -jobid 39
zdmhost.domain.com: Audit ID: 434
Job ID: 39
User: zdmuser
Client: zdmhost
Job Type: "EVAL"
Scheduled job command: "zdmcli migrate database -sourcesid ONPR -rsp /home/zdmuser/migration/zdm_ONPR_physical_online.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa -srcarg3 sudo_location:/usr/bin/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -tdekeystorepasswd -tgttdekeystorepasswd -eval"
Scheduled job execution start time: 2024-02-14T14:18:19+01. Equivalent local time: 2024-02-14 14:18:19
Current status: SUCCEEDED
Result file path: "/u01/app/oracle/chkbase/scheduled/job-39-2024-02-14-14:18:29.log"
Metrics file path: "/u01/app/oracle/chkbase/scheduled/job-39-2024-02-14-14:18:29.json"
Job execution start time: 2024-02-14 14:18:29
Job execution end time: 2024-02-14 14:21:18
Job execution elapsed time: 2 minutes 48 seconds
ZDM_GET_SRC_INFO ........... PRECHECK_PASSED
ZDM_GET_TGT_INFO ........... PRECHECK_PASSED
ZDM_PRECHECKS_SRC .......... PRECHECK_PASSED
ZDM_PRECHECKS_TGT .......... PRECHECK_PASSED
ZDM_SETUP_SRC .............. PRECHECK_PASSED
ZDM_SETUP_TGT .............. PRECHECK_PASSED
ZDM_PREUSERACTIONS ......... PRECHECK_PASSED
ZDM_PREUSERACTIONS_TGT ..... PRECHECK_PASSED
ZDM_VALIDATE_SRC ........... PRECHECK_PASSED
ZDM_VALIDATE_TGT ........... PRECHECK_PASSED
ZDM_POSTUSERACTIONS ........ PRECHECK_PASSED
ZDM_POSTUSERACTIONS_TGT .... PRECHECK_PASSED
ZDM_CLEANUP_SRC ............ PRECHECK_PASSED
ZDM_CLEANUP_TGT ............ PRECHECK_PASSED

Run the Migration with ZDM

We will run the migration adding a pause after the ZDM step ZDM_CONFIGURE_DG_SRC. ZDM will thus prepare the whole environment (setting up the environment, creating the standby and configuring Data Guard). All these steps can be done without any downtime.

[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli migrate database -sourcesid ONPR -rsp /home/zdmuser/migration/zdm_ONPR_physical_online.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa -srcarg3 sudo_location:/usr/bin/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -tdekeystorepasswd -tgttdekeystorepasswd -pauseafter ZDM_CONFIGURE_DG_SRC
zdmhost.domain.com: Audit ID: 543
Enter source database ONPR SYS password:
Enter source database ONPR TDE keystore password:
Enter target container database TDE keystore password:
zdmhost: 2024-02-22T09:27:17.864Z : Processing response file ...
Operation "zdmcli migrate database" scheduled with the job ID "44".
[zdmuser@zdmhost migration]$

[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli query job -jobid 44
zdmhost.domain.com: Audit ID: 551
Job ID: 44
User: zdmuser
Client: zdmhost
Job Type: "MIGRATE"
Scheduled job command: "zdmcli migrate database -sourcesid ONPR -rsp /home/zdmuser/migration/zdm_ONPR_physical_online.rsp -sourcenode vmonpr -srcauth zdmauth -srcarg1 user:oracle -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa -srcarg3 sudo_location:/usr/bin/sudo -targetnode ExaCC-cl01n1 -tgtauth zdmauth -tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa -tgtarg3 sudo_location:/usr/bin/sudo -tdekeystorepasswd -tgttdekeystorepasswd -pauseafter ZDM_CONFIGURE_DG_SRC"
Scheduled job execution start time: 2024-02-22T10:27:17+01. Equivalent local time: 2024-02-22 10:27:17
Current status: PAUSED
Current Phase: "ZDM_CONFIGURE_DG_SRC"
Result file path: "/u01/app/oracle/chkbase/scheduled/job-44-2024-02-22-10:27:27.log"
Metrics file path: "/u01/app/oracle/chkbase/scheduled/job-44-2024-02-22-10:27:27.json"
Job execution start time: 2024-02-22 10:27:27
Job execution end time: 2024-02-22 10:39:38
Job execution elapsed time: 12 minutes 11 seconds
ZDM_GET_SRC_INFO ................ COMPLETED
ZDM_GET_TGT_INFO ................ COMPLETED
ZDM_PRECHECKS_SRC ............... COMPLETED
ZDM_PRECHECKS_TGT ............... COMPLETED
ZDM_SETUP_SRC ................... COMPLETED
ZDM_SETUP_TGT ................... COMPLETED
ZDM_PREUSERACTIONS .............. COMPLETED
ZDM_PREUSERACTIONS_TGT .......... COMPLETED
ZDM_VALIDATE_SRC ................ COMPLETED
ZDM_VALIDATE_TGT ................ COMPLETED
ZDM_DISCOVER_SRC ................ COMPLETED
ZDM_COPYFILES ................... COMPLETED
ZDM_PREPARE_TGT ................. COMPLETED
ZDM_SETUP_TDE_TGT ............... COMPLETED
ZDM_RESTORE_TGT ................. COMPLETED
ZDM_RECOVER_TGT ................. COMPLETED
ZDM_FINALIZE_TGT ................ COMPLETED
ZDM_CONFIGURE_DG_SRC ............ COMPLETED
ZDM_SWITCHOVER_SRC .............. PENDING
ZDM_SWITCHOVER_TGT .............. PENDING
ZDM_POST_DATABASE_OPEN_TGT ...... PENDING
ZDM_DATAPATCH_TGT ............... PENDING
ZDM_MODIFY_DBUNIQUENAME_TGT ..... PENDING
ZDM_NONCDBTOPDB_PRECHECK ........ PENDING
ZDM_NONCDBTOPDB_CONVERSION ...... PENDING
ZDM_POST_MIGRATE_TGT ............ PENDING
TIMEZONE_UPGRADE_PREPARE_TGT .... PENDING
TIMEZONE_UPGRADE_TGT ............ PENDING
ZDM_POSTUSERACTIONS ............. PENDING
ZDM_POSTUSERACTIONS_TGT ......... PENDING
ZDM_CLEANUP_SRC ................. PENDING
ZDM_CLEANUP_TGT ................. PENDING

Pause After Phase: "ZDM_CONFIGURE_DG_SRC"

We can see that all steps completed successfully, and that ZDM paused the migration after Data Guard was configured.

I can review the full ZDM log:

[zdmuser@zdmhost ~]$ cat /u01/app/oracle/chkbase/scheduled/job-44-2024-02-22-10:27:27.log
zdmhost: 2024-02-22T09:27:27.341Z : Processing response file ...
zdmhost: 2024-02-22T09:27:27.345Z : Processing response file ...
zdmhost: 2024-02-22T09:27:32.418Z : Starting zero downtime migrate operation ...
zdmhost: 2024-02-22T09:27:34.498Z : Executing phase ZDM_GET_SRC_INFO
zdmhost: 2024-02-22T09:27:34.498Z : Retrieving information from source node "vmonpr" ...
zdmhost: 2024-02-22T09:27:34.498Z : retrieving information about database "ONPR" ...
zdmhost: 2024-02-22T09:27:38.793Z : Execution of phase ZDM_GET_SRC_INFO completed
zdmhost: 2024-02-22T09:27:38.819Z : Executing phase ZDM_GET_TGT_INFO
zdmhost: 2024-02-22T09:27:38.819Z : Retrieving information from target node "ExaCC-cl01n1" ...
zdmhost: 2024-02-22T09:27:46.173Z : Determined value for parameter TGT_DATADG is '+DATAC1'
zdmhost: 2024-02-22T09:27:46.173Z : Determined value for parameter TGT_REDODG is '+DATAC1'
zdmhost: 2024-02-22T09:27:46.173Z : Determined value for parameter TGT_RECODG is '+RECOC1'
zdmhost: 2024-02-22T09:27:46.284Z : Execution of phase ZDM_GET_TGT_INFO completed
zdmhost: 2024-02-22T09:27:46.821Z : Executing phase ZDM_PRECHECKS_SRC
zdmhost: 2024-02-22T09:27:46.821Z : Execution of phase ZDM_PRECHECKS_SRC completed
zdmhost: 2024-02-22T09:27:47.080Z : Executing phase ZDM_PRECHECKS_TGT
zdmhost: 2024-02-22T09:27:47.080Z : Execution of phase ZDM_PRECHECKS_TGT completed
zdmhost: 2024-02-22T09:27:47.118Z : Executing phase ZDM_SETUP_SRC
zdmhost: 2024-02-22T09:27:47.118Z : Setting up ZDM on the source node vmonpr ...
vmonpr: 2024-02-22T09:28:49.592Z : TNS aliases successfully setup on the source node vmonpr...
zdmhost: 2024-02-22T09:28:49.694Z : Execution of phase ZDM_SETUP_SRC completed
####################################################################
zdmhost: 2024-02-22T09:28:49.730Z : Executing phase ZDM_SETUP_TGT
zdmhost: 2024-02-22T09:28:49.730Z : Setting up ZDM on the target node ExaCC-cl01n1 ...
ExaCC-cl01n1: 2024-02-22T09:29:12.976Z : TNS aliases successfully setup on the target node ExaCC-cl01n1...
zdmhost: 2024-02-22T09:29:12.979Z : Execution of phase ZDM_SETUP_TGT completed
####################################################################
zdmhost: 2024-02-22T09:29:13.023Z : Executing phase ZDM_VALIDATE_SRC
zdmhost: 2024-02-22T09:29:13.024Z : Validating source environment on node vmonpr ...
vmonpr: 2024-02-22T09:29:23.649Z : Validating SYS account password specified..
vmonpr: 2024-02-22T09:29:34.470Z : Validating source environment...
vmonpr: 2024-02-22T09:29:34.470Z : Ensuring source database is running in ARCHIVELOG mode...
vmonpr: 2024-02-22T09:29:34.871Z : Validating Oracle TDE setup
vmonpr: 2024-02-22T09:29:37.474Z : Validating Oracle Password file
vmonpr: 2024-02-22T09:29:38.476Z : Validating database ONPR role is PRIMARY...
vmonpr: 2024-02-22T09:29:38.478Z : Source environment validated successfully
zdmhost: 2024-02-22T09:29:38.487Z : Execution of phase ZDM_VALIDATE_SRC completed
####################################################################
zdmhost: 2024-02-22T09:29:38.521Z : Executing phase ZDM_VALIDATE_TGT
zdmhost: 2024-02-22T09:29:38.521Z : Validating target environment on node ExaCC-cl01n1 ...
zdmhost: 2024-02-22T09:29:38.573Z : Source database timezone file version 32 is less than target database timezone file version 42. Timezone upgrade operation will be performed on target database after completion of database migration.
ExaCC-cl01n1: 2024-02-22T09:29:50.315Z : Validating specified Oracle ASM storage locations...
ExaCC-cl01n1: 2024-02-22T09:29:54.219Z : validating target database size allocation...
ExaCC-cl01n1: 2024-02-22T09:29:56.922Z : Verifying SQL*Net connectivity to source database ...
ExaCC-cl01n1: 2024-02-22T09:29:57.223Z : verifying passwordless connectivity between target nodes
ExaCC-cl01n1: 2024-02-22T09:29:58.425Z : Target environment validated successfully
zdmhost: 2024-02-22T09:29:58.433Z : Execution of phase ZDM_VALIDATE_TGT completed
####################################################################
zdmhost: 2024-02-22T09:29:58.455Z : Executing phase ZDM_DISCOVER_SRC
zdmhost: 2024-02-22T09:29:58.455Z : Setting up the source node vmonpr for creating standby on the target node ExaCC-cl01n1 ...
vmonpr: 2024-02-22T09:30:09.186Z : Enabling force logging on database ONPR...
vmonpr: 2024-02-22T09:30:09.287Z : Creating standby logs on database ONPR...
vmonpr: 2024-02-22T09:30:13.591Z : Source environment set up successfully
zdmhost: 2024-02-22T09:30:13.700Z : Execution of phase ZDM_DISCOVER_SRC completed
####################################################################
zdmhost: 2024-02-22T09:30:13.729Z : Executing phase ZDM_COPYFILES
zdmhost: 2024-02-22T09:30:13.729Z : Copying files from source node vmonpr to target node ExaCC-cl01n1 ...
vmonpr: 2024-02-22T09:30:24.849Z : Source database "ONPR" credentials exported successfully on node "vmonpr"
zdmhost: 2024-02-22T09:30:29.112Z : Execution of phase ZDM_COPYFILES completed
####################################################################
zdmhost: 2024-02-22T09:30:29.148Z : Executing phase ZDM_PREPARE_TGT
zdmhost: 2024-02-22T09:30:29.148Z : Setting up standby on the target node ExaCC-cl01n1 ...
ExaCC-cl01n1: 2024-02-22T09:31:03.106Z : Target environment set up successfully
zdmhost: 2024-02-22T09:31:03.115Z : Execution of phase ZDM_PREPARE_TGT completed
####################################################################
zdmhost: 2024-02-22T09:31:03.137Z : Executing phase ZDM_SETUP_TDE_TGT
zdmhost: 2024-02-22T09:31:03.137Z : Setting up Oracle Transparent Data Encryption (TDE) keystore on the target node ExaCC-cl01n1 ...
ExaCC-cl01n1: 2024-02-22T09:31:13.880Z : target environment Oracle Transparent Data Encryption (TDE) set up successfully
zdmhost: 2024-02-22T09:31:13.889Z : Execution of phase ZDM_SETUP_TDE_TGT completed
####################################################################
zdmhost: 2024-02-22T09:31:13.913Z : Executing phase ZDM_RESTORE_TGT
zdmhost: 2024-02-22T09:31:13.913Z : Restoring database on the target node ExaCC-cl01n1 ...
ExaCC-cl01n1: 2024-02-22T09:31:36.483Z : database ONPRZ_APP_001T dropped successfully
ExaCC-cl01n1: 2024-02-22T09:31:54.048Z : Target database "ONPRZ_APP_001T" credentials staged successfully on node "ExaCC-cl01n1"
ExaCC-cl01n1: 2024-02-22T09:32:06.392Z : Restoring SPFILE ...
ExaCC-cl01n1: 2024-02-22T09:32:45.923Z : SPFILE restored to /u02/app/oracle/product/19.0.0.0/dbhome_2/dbs/spfileONPRZ_APP_001T1.ora successfully
ExaCC-cl01n1: 2024-02-22T09:32:59.196Z : Restoring control files ...
ExaCC-cl01n1: 2024-02-22T09:33:20.313Z : Control files restored successfully
ExaCC-cl01n1: 2024-02-22T09:33:31.775Z : Restoring and encrypting data files ...
ExaCC-cl01n1: 2024-02-22T09:34:38.628Z : Data files restored and encrypted successfully
ExaCC-cl01n1: 2024-02-22T09:34:38.629Z : Cleaning up any orphaned data ...
ExaCC-cl01n1: 2024-02-22T09:34:38.730Z : Orphaned files clean up successful
ExaCC-cl01n1: 2024-02-22T09:34:39.034Z : Data files restored successfully
ExaCC-cl01n1: 2024-02-22T09:34:51.398Z : Renaming TEMP files and online redo log files ...
ExaCC-cl01n1: 2024-02-22T09:35:03.909Z : TEMP files and online redo log files renamed successfully
ExaCC-cl01n1: 2024-02-22T09:35:16.374Z : Recovering data files ...
ExaCC-cl01n1: 2024-02-22T09:35:20.080Z : Data files recovered successfully
zdmhost: 2024-02-22T09:35:20.094Z : Execution of phase ZDM_RESTORE_TGT completed
####################################################################
zdmhost: 2024-02-22T09:35:20.115Z : Executing phase ZDM_RECOVER_TGT
zdmhost: 2024-02-22T09:35:20.115Z : Recovering database on the target node ExaCC-cl01n1 ...
ExaCC-cl01n1: 2024-02-22T09:35:37.674Z : Target database "ONPRZ_APP_001T" credentials staged successfully on node "ExaCC-cl01n1"
ExaCC-cl01n1: 2024-02-22T09:35:50.018Z : Restoring control files ...
ExaCC-cl01n1: 2024-02-22T09:36:31.849Z : Running RMAN crosscheck on database "ONPRZ_APP_001T" ...
ExaCC-cl01n1: 2024-02-22T09:36:36.354Z : RMAN crosscheck on database "ONPRZ_APP_001T" ran successfully
ExaCC-cl01n1: 2024-02-22T09:36:36.354Z : Running RMAN catalog ...
ExaCC-cl01n1: 2024-02-22T09:36:38.357Z : RMAN catalog ran successfully
ExaCC-cl01n1: 2024-02-22T09:36:38.357Z : Control files restored successfully
ExaCC-cl01n1: 2024-02-22T09:36:50.818Z : Starting incremental restore of data files ...
ExaCC-cl01n1: 2024-02-22T09:36:55.122Z : Incremental restore of data files executed successfully
ExaCC-cl01n1: 2024-02-22T09:36:55.123Z : Cleaning up any orphaned data ...
ExaCC-cl01n1: 2024-02-22T09:36:55.224Z : Orphaned files clean up successful
ExaCC-cl01n1: 2024-02-22T09:36:55.327Z : Data files restored successfully
ExaCC-cl01n1: 2024-02-22T09:37:07.696Z : Renaming TEMP files and online redo log files ...
ExaCC-cl01n1: 2024-02-22T09:37:20.207Z : TEMP files and online redo log files renamed successfully
ExaCC-cl01n1: 2024-02-22T09:37:32.669Z : Recovering data files ...
ExaCC-cl01n1: 2024-02-22T09:37:36.575Z : Data files recovered successfully
zdmhost: 2024-02-22T09:37:36.590Z : Execution of phase ZDM_RECOVER_TGT completed
####################################################################
zdmhost: 2024-02-22T09:37:36.614Z : Executing phase ZDM_FINALIZE_TGT
zdmhost: 2024-02-22T09:37:36.614Z : Finalizing creation of standby database on the target node ExaCC-cl01n1 ...
ExaCC-cl01n1: 2024-02-22T09:37:47.554Z : Updating database cluster resource dependency ...
ExaCC-cl01n1: 2024-02-22T09:38:30.389Z : Creating standby redo logs on target database ONPRZ_APP_001T
ExaCC-cl01n1: 2024-02-22T09:38:30.490Z : Enabling Oracle Data Guard Broker on "ONPRZ_APP_001T" ...
ExaCC-cl01n1: 2024-02-22T09:38:33.493Z : Oracle Data Guard Broker enabled successfully on "ONPRZ_APP_001T"
ExaCC-cl01n1: 2024-02-22T09:38:33.694Z : Target database updated successfully
zdmhost: 2024-02-22T09:38:33.704Z : Execution of phase ZDM_FINALIZE_TGT completed
####################################################################
zdmhost: 2024-02-22T09:38:33.725Z : Executing phase ZDM_CONFIGURE_DG_SRC
zdmhost: 2024-02-22T09:38:33.726Z : Finalize steps done on the source node vmonpr for creating standby on the target node ExaCC-cl01n1 ...
vmonpr: 2024-02-22T09:38:44.648Z : Configuring Oracle Data Guard Broker on "ONPR" ...
vmonpr: 2024-02-22T09:39:38.289Z : Oracle Data Guard Broker configured successfully on "ONPR"
vmonpr: 2024-02-22T09:39:38.390Z : Source database updated successfully
zdmhost: 2024-02-22T09:39:38.398Z : Execution of phase ZDM_CONFIGURE_DG_SRC completed
####################################################################
zdmhost: 2024-02-22T09:39:38.403Z : Job execution paused after phase "ZDM_CONFIGURE_DG_SRC".

We can see that ZDM validates the source and target, checks the SYS password, checks and sets up TDE, validates the Oracle Net connections, validates the ASM storage, creates the standby redo logs, creates the standby database using the direct data transfer method (the application tablespaces will be encrypted), and configures Data Guard.

We can check additional, more detailed logs on the source host:

oracle@vmonpr:/home/oracle/ [ONPR] cd  /u00/app/oracle/zdm/zdm_ONPR_44/zdm/log/

oracle@vmonpr:/u00/app/oracle/zdm/zdm_ONPR_44/zdm/log/ [ONPR] ls -ltrh
total 228K
-rw-rw-rw-. 1 oracle dba  14K Feb 22 10:28 zdm_setup_tns_alias_src_12728.log
-rwxrwxrwx. 1 oracle dba    0 Feb 22 10:29 default.log
-rw-rw-rw-. 1 oracle dba  17K Feb 22 10:29 zdm_validate_sys_pass_src_13181.log
-rw-rw-rw-. 1 root   root 57K Feb 22 10:29 zdm_validate_src_13240.log
-rw-rw-rw-. 1 root   root 62K Feb 22 10:30 zdm_discover_src_13722.log
-rw-rw-rw-. 1 root   root 22K Feb 22 10:30 zdm_export_db_cred_src_15024.log
-rw-rw-rw-. 1 oracle dba  43K Feb 22 10:39 zdm_configure_dg_src_18159.log

And on the target (ExaCC):

[opc@ExaCC-cl01n1 ~]$ cd /u02/app/oracle/zdm/zdm_ONPR_RZ2_44/zdm/log/

[opc@ExaCC-cl01n1 log]$ ls -ltrh
total 516K
-rw-rw-rw- 1 oracle root  33K Feb 22 10:29 zdm_setup_tns_alias_tgt_224683.log
-rwxrwxrwx 1 oracle root    0 Feb 22 10:29 default.log
-rw-rw-rw- 1 oracle root  29K Feb 22 10:29 zdm_validate_tgt_231351.log
-rw-rw-rw- 1 root   root  59K Feb 22 10:31 zdm_prepare_tgt_239052.log
-rw-rw-rw- 1 root   root 7.9K Feb 22 10:31 zdm_setup_tde_tgt_242739.log
-rw-rw-rw- 1 root   root  16K Feb 22 10:31 zdm_oss_restore_tgt_dropdatabase_244302.log
-rw-rw-rw- 1 root   root  21K Feb 22 10:31 zdm_import_db_cred_tgt_250414.log
-rw-rw-rw- 1 root   root  24K Feb 22 10:32 zdm_oss_restore_tgt_restoreinit_252706.log
-rw-rw-rw- 1 root   root  19K Feb 22 10:33 zdm_oss_restore_tgt_restorecntrl_260310.log
-rw-rw-rw- 1 root   root  37K Feb 22 10:34 zdm_oss_restore_tgt_restoredb_263322.log
-rw-rw-rw- 1 root   root  36K Feb 22 10:35 zdm_oss_restore_tgt_renametemp_279405.log
-rw-rw-rw- 1 root   root  22K Feb 22 10:35 zdm_oss_restore_tgt_recoverdb_281523.log
-rw-rw-rw- 1 root   root  22K Feb 22 10:35 zdm_import_db_cred_tgt_283595.log
-rw-rw-rw- 1 root   root  28K Feb 22 10:36 zdm_oss_recover_tgt_restorecntrl_288735.log
-rw-rw-rw- 1 root   root  37K Feb 22 10:36 zdm_oss_recover_tgt_restoredb_297044.log
-rw-rw-rw- 1 root   root  36K Feb 22 10:37 zdm_oss_recover_tgt_renametemp_298270.log
-rw-rw-rw- 1 root   root  22K Feb 22 10:37 zdm_oss_recover_tgt_recoverdb_300955.log
-rw-rw-rw- 1 root   root  41K Feb 22 10:38 zdm_finalize_tgt_305026.log

Finally, I can check the Data Guard configuration and see that my standby is synchronized (no gap).

oracle@vmonpr:/home/oracle/ [ONPR] dgmgrl
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Thu Feb 22 10:42:39 2024
Version 19.10.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.

DGMGRL> connect /
Connected to "ONPR"
Connected as SYSDG.

DGMGRL> show configuration lag

Configuration - ZDM_onpr

  Protection Mode: MaxPerformance
  Members:
  onpr           - Primary database
    onprz_app_001t - Physical standby database
                     Transport Lag:      0 seconds (computed 1 second ago)
                     Apply Lag:          0 seconds (computed 1 second ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 36 seconds ago)

I can see that ZDM created a new instance on the ExaCC, named after the final PDB:

oracle@ExaCC-cl01n1:/u02/app/oracle/zdm/zdm_ONPR_RZ2_38/zdm/log/ [ONPR1 (CDB$ROOT)] ps -ef | grep [p]mon | grep -i ONPRZ_APP_001T1
oracle   236556      1  0 12:22 ?        00:00:00 ora_pmon_ONPRZ_APP_001T1

If we connect to it, we can see that the instance name matches the final PDB name and that the DB name matches the source database to be migrated. The DB_UNIQUE_NAME is the PDB name.

oracle@ExaCC-cl01n1:/u02/app/oracle/product/19.0.0.0/dbhome_2/ [ONPR1 (CDB$ROOT)] export ORACLE_SID=ONPRZ_APP_001T1

oracle@ExaCC-cl01n1:/u02/app/oracle/product/19.0.0.0/dbhome_2/ [ONPRZ_APP_001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 14 13:22:42 2024
Version 19.21.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.21.0.0.0

SQL> set lines 300 pages 500
SQL> select instance_name from v$instance;

INSTANCE_NAME
------------------------------------------------
ONPRZ_APP_001T1

SQL> select name, db_unique_name, open_mode, database_role from v$database;

NAME                        DB_UNIQUE_NAME                                                                             OPEN_MODE                                                    DATABASE_ROLE
--------------------------- ------------------------------------------------------------------------------------------ ------------------------------------------------------------ ------------------------------------------------
ONPR                        ONPRZ_APP_001T                                                                             MOUNTED                                                      PHYSICAL STANDBY

And I could see that this temporary database was a single instance database:

SQL> show parameter cluster_database;

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
cluster_database                     boolean                           FALSE
cluster_database_instances           integer                           1

Migration – Maintenance Windows

Now we will take the maintenance window and switch the database over to the ExaCC. After the switchover, ZDM will run datapatch (to patch the database to the new version, 19.21), convert the non-CDB to a PDB, upgrade the time zone, and run any other post-migration tasks.

For this we just need to resume the job. We could even resume it adding a new pause if we want to do each step separately.
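
For example, to stop again right after the switchover itself, something like the following should work; the -pauseafter option of resume job is documented for zdmcli, but we did not run it this way, so treat it as a sketch:

[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli resume job -jobid 44 -pauseafter ZDM_SWITCHOVER_TGT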

To resume the job:

[zdmuser@zdmhost migration]$ /u01/app/oracle/product/zdm/bin/zdmcli resume job -jobid 44
zdmhost.domain.com: Audit ID: 552

Wrap up

ZDM is really a nice tool, and it configures almost everything for you. Although I sometimes faced problems that I had to troubleshoot, I could always find a solution, and the Oracle ZDM team is very flexible and available to discuss. As all our on-premises databases run 19.10 and the customer does not want to go without a fallback, I unfortunately could not test the switchover and the subsequent conversion steps yet. I will blog about them later, once the customer databases have been patched to the latest 19c version and I can move forward with the tests.

The post Physical Online Migration to ExaCC with Oracle Zero Downtime Migration (ZDM) appeared first on dbi Blog.

A PDB as a Cluster Resource in RAC

Hemant K Chitale - Fri, 2024-02-23 03:40

In my previous post, I demonstrated how a new service can be created for a Pluggable Database (PDB) in Oracle RAC.

Custom Services are configured as Resources.  Thus, in that post, I demonstrated the custom service "newservice" for the PDB "hemantpdb" being configured in the Cluster.

However, PDBs can also be configured and managed as resources (see the reference in the Oracle Database Features documentation).

This is the current listing of resources in my 2-node 21c RAC Cluster :



[grid@node1 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.chad
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.net1.network
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.ons
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node1                    STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node1                    STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node2                    STABLE
ora.RECO.dg(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    Started,STABLE
      2        ONLINE  ONLINE       node2                    Started,STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.cdp1.cdp
      1        ONLINE  ONLINE       node1                    STABLE
ora.cdp2.cdp
      1        ONLINE  ONLINE       node1                    STABLE
ora.cdp3.cdp
      1        ONLINE  ONLINE       node2                    STABLE
ora.cvu
      1        ONLINE  ONLINE       node1                    STABLE
ora.db21crac.db
      1        ONLINE  ONLINE       node2                    Open,HOME=/u01/app/o
                                                             racle/product/21.3.0
                                                             .0/dbhome_1,STABLE
      2        ONLINE  ONLINE       node1                    Open,HOME=/u01/app/o
                                                             racle/product/21.3.0
                                                             .0/dbhome_1,STABLE
ora.db21crac.hemantpdb.pdb
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.db21crac.newservice.svc
      1        ONLINE  ONLINE       node1                    STABLE
ora.node1.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.node2.vip
      1        ONLINE  ONLINE       node2                    STABLE
ora.qosmserver
      1        ONLINE  ONLINE       node1                    STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       node2                    STABLE
--------------------------------------------------------------------------------
[grid@node1 ~]$


Above, we can see 
the database resource "ora.db21crac.db"
the pdb resource "ora.db21crac.hemantpdb.pdb"
the service resource "ora.db21crac.newservice.svc"  (which I created in my previous post)
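
A single resource can also be queried directly instead of scanning the full listing, for example (a sketch using the PDB resource name from above):

[grid@node1 ~]$ crsctl status resource ora.db21crac.hemantpdb.pdb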

Now, I will add a new PDB "pdb2"


[oracle@node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Fri Feb 23 17:04:31 2024
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.


Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL> create pluggable database pdb2 admin user pdb2_admin identified by pdb2_admin;

Pluggable database created.

SQL>
SQL> col name format a30
SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
HEMANTPDB                      READ WRITE
PDB2                           MOUNTED

SQL>

[grid@node1 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.chad
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.net1.network
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.ons
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node1                    STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node1                    STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node2                    STABLE
ora.RECO.dg(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    Started,STABLE
      2        ONLINE  ONLINE       node2                    Started,STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.cdp1.cdp
      1        ONLINE  ONLINE       node1                    STABLE
ora.cdp2.cdp
      1        ONLINE  ONLINE       node1                    STABLE
ora.cdp3.cdp
      1        ONLINE  ONLINE       node2                    STABLE
ora.cvu
      1        ONLINE  ONLINE       node1                    STABLE
ora.db21crac.db
      1        ONLINE  ONLINE       node2                    Open,HOME=/u01/app/o
                                                             racle/product/21.3.0
                                                             .0/dbhome_1,STABLE
      2        ONLINE  ONLINE       node1                    Open,HOME=/u01/app/o
                                                             racle/product/21.3.0
                                                             .0/dbhome_1,STABLE
ora.db21crac.hemantpdb.pdb
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.db21crac.newservice.svc
      1        ONLINE  ONLINE       node1                    STABLE
ora.node1.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.node2.vip
      1        ONLINE  ONLINE       node2                    STABLE
ora.qosmserver
      1        ONLINE  ONLINE       node1                    STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       node2                    STABLE
--------------------------------------------------------------------------------
[grid@node1 ~]$


The new PDB, "pdb2", has been created but not yet opened (it is still only in the MOUNTED state). If I try to register it with srvctl specifying CARDINALITY, I get an error that CARDINALITY wasn't specified for the first PDB, which had been created manually from the SQL command line and not registered via srvctl. I *can*, however, add the new PDB without specifying CARDINALITY and then verify its status in the Cluster :

[oracle@node1 ~]$ srvctl add pdb -db DB21CRAC -pdb PDB2 -cardinality ALL -startoption open
PRCZ-4036 : failed to create pluggable database PDB2 with '-cardinality' option because this option was not specified for the previously created pluggable databases for database DB21CRAC

[oracle@node1 ~]$ srvctl config pdb -db DB21CRAC
Pluggable database name: HEMANTPDB
Application Root PDB:
Cardinality: %CRS_SERVER_POOL_SIZE%
Maximum CPU count (whole CPUs): 0
Minimum CPU count unit (1/100 CPU count): 0
Management policy: MANUAL
Rank value: 0
Start Option: open
Stop Option: immediate

[oracle@node1 ~]$ srvctl add pdb -db DB21CRAC -pdb PDB2  -startoption open

[oracle@node1 ~]$ srvctl config pdb -db DB21CRAC
Pluggable database name: HEMANTPDB
Application Root PDB:
Cardinality: %CRS_SERVER_POOL_SIZE%
Maximum CPU count (whole CPUs): 0
Minimum CPU count unit (1/100 CPU count): 0
Management policy: MANUAL
Rank value: 0
Start Option: open
Stop Option: immediate
Pluggable database name: PDB2
Application Root PDB:
Cardinality: %CRS_SERVER_POOL_SIZE%
Maximum CPU count (whole CPUs): 0
Minimum CPU count unit (1/100 CPU count): 0
Management policy: AUTOMATIC
Rank value: 0
Start Option: open
Stop Option: immediate
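
(As an aside: if a uniform CARDINALITY were desired for both PDBs, one approach, a sketch only, not something I ran here, and it assumes that "srvctl modify pdb" accepts the -cardinality option, which I have not verified, would be to modify the existing registration before adding the new PDB:

srvctl modify pdb -db DB21CRAC -pdb HEMANTPDB -cardinality ALL
srvctl add pdb -db DB21CRAC -pdb PDB2 -cardinality ALL -startoption open
)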

[grid@node1 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.chad
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.net1.network
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.ons
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node1                    STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node1                    STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node2                    STABLE
ora.RECO.dg(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    Started,STABLE
      2        ONLINE  ONLINE       node2                    Started,STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.cdp1.cdp
      1        ONLINE  ONLINE       node1                    STABLE
ora.cdp2.cdp
      1        ONLINE  ONLINE       node1                    STABLE
ora.cdp3.cdp
      1        ONLINE  ONLINE       node2                    STABLE
ora.cvu
      1        ONLINE  ONLINE       node1                    STABLE
ora.db21crac.db
      1        ONLINE  ONLINE       node2                    Open,HOME=/u01/app/o
                                                             racle/product/21.3.0
                                                             .0/dbhome_1,STABLE
      2        ONLINE  ONLINE       node1                    Open,HOME=/u01/app/o
                                                             racle/product/21.3.0
                                                             .0/dbhome_1,STABLE
ora.db21crac.hemantpdb.pdb
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.db21crac.newservice.svc
      1        ONLINE  ONLINE       node1                    STABLE
ora.db21crac.pdb2.pdb
      1        OFFLINE OFFLINE                               STABLE
      2        OFFLINE OFFLINE                               STABLE
ora.node1.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.node2.vip
      1        ONLINE  ONLINE       node2                    STABLE
ora.qosmserver
      1        ONLINE  ONLINE       node1                    STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       node2                    STABLE
--------------------------------------------------------------------------------
[grid@node1 ~]$


If I then OPEN the new PDB on both instances, I can query the status again :

[oracle@node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Fri Feb 23 17:19:20 2024
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.


Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL> alter pluggable database pdb2 open;

Pluggable database altered.

SQL>

[oracle@node2 ~]$ sqlplus / as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Fri Feb 23 17:19:36 2024
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.


Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL> alter pluggable database pdb2 open;

Pluggable database altered.

SQL>
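
From inside the database, the open mode on both instances can also be confirmed with a quick query against GV$PDBS (a sketch added here for illustration, not part of the original session):

select inst_id, name, open_mode
from gv$pdbs
where name = 'PDB2';
-- once opened on both instances, OPEN_MODE should show READ WRITE for INST_ID 1 and 2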

[grid@node1 ~]$ crsctl status resource ora.db21crac.pdb2.pdb
NAME=ora.db21crac.pdb2.pdb
TYPE=ora.pdb.type
TARGET=ONLINE         , ONLINE
STATE=ONLINE on node1, ONLINE on node2

[grid@node1 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.chad
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.net1.network
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.ons
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node1                    STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node1                    STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node2                    STABLE
ora.RECO.dg(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    Started,STABLE
      2        ONLINE  ONLINE       node2                    Started,STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.cdp1.cdp
      1        ONLINE  ONLINE       node1                    STABLE
ora.cdp2.cdp
      1        ONLINE  ONLINE       node1                    STABLE
ora.cdp3.cdp
      1        ONLINE  ONLINE       node2                    STABLE
ora.cvu
      1        ONLINE  ONLINE       node1                    STABLE
ora.db21crac.db
      1        ONLINE  ONLINE       node2                    Open,HOME=/u01/app/o
                                                             racle/product/21.3.0
                                                             .0/dbhome_1,STABLE
      2        ONLINE  ONLINE       node1                    Open,HOME=/u01/app/o
                                                             racle/product/21.3.0
                                                             .0/dbhome_1,STABLE
ora.db21crac.hemantpdb.pdb
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.db21crac.newservice.svc
      1        ONLINE  ONLINE       node1                    STABLE
ora.db21crac.pdb2.pdb
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.node1.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.node2.vip
      1        ONLINE  ONLINE       node2                    STABLE
ora.qosmserver
      1        ONLINE  ONLINE       node1                    STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       node2                    STABLE
--------------------------------------------------------------------------------
[grid@node1 ~]$


I can now add a custom service, "pdb2service", for pdb2, using DB21CRAC2 (instance 2, instead of instance 1) as the preferred instance and with SESSION failover, and compare it with "newservice" for hemantpdb that I had added in the previous blogpost :

[oracle@node1 ~]$ srvctl add service -db DB21CRAC  -service pdb2service -preferred DB21CRAC2 -available DB21CRAC1 \
> -tafpolicy BASIC -failovertype SESSION -pdb PDB2

[oracle@node1 ~]$ srvctl status service -db DB21CRAC -service pdb2service
Service pdb2service is not running.

[oracle@node1 ~]$ srvctl config service -db DB21CRAC -service pdb2service
Service name: pdb2service
Server pool:
Cardinality: 1
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Reset State: NONE
Failover type: SESSION
Failover method:
Failover retries:
Failover delay:
Failover restore: NONE
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Pluggable database name: PDB2
Hub service:
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Failback :  no
Replay Initiation Time: 300 seconds
Drain timeout:
Stop option:
Session State Consistency: DYNAMIC
GSM Flags: 0
Service is enabled
Preferred instances: DB21CRAC2
Available instances: DB21CRAC1
CSS critical: no
[oracle@node1 ~]$


[oracle@node1 ~]$ srvctl config service -db DB21CRAC -service newservice
Service name: newservice
Server pool:
Cardinality: 1
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Reset State: NONE
Failover type: SELECT
Failover method:
Failover retries:
Failover delay:
Failover restore: NONE
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Pluggable database name: HEMANTPDB
Hub service:
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Failback :  no
Replay Initiation Time: 300 seconds
Drain timeout:
Stop option:
Session State Consistency: DYNAMIC
GSM Flags: 0
Service is enabled
Preferred instances: DB21CRAC1
Available instances: DB21CRAC2
CSS critical: no
Service uses Java: false
[oracle@node1 ~]$

[oracle@node1 ~]$ srvctl start service -db DB21CRAC -service pdb2service

[oracle@node1 ~]$ srvctl config service -db DB21CRAC -service pdb2service
Service name: pdb2service
Server pool:
Cardinality: 1
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Reset State: NONE
Failover type: SESSION
Failover method:
Failover retries:
Failover delay:
Failover restore: NONE
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Pluggable database name: PDB2
Hub service:
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Failback :  no
Replay Initiation Time: 300 seconds
Drain timeout:
Stop option:
Session State Consistency: DYNAMIC
GSM Flags: 0
Service is enabled
Preferred instances: DB21CRAC2
Available instances: DB21CRAC1
CSS critical: no
Service uses Java: false
[oracle@node1 ~]$

[oracle@node1 ~]$ srvctl status service -db DB21CRAC -service pdb2service
Service pdb2service is running on instance(s) DB21CRAC2

[oracle@node1 ~]$ srvctl status service -db DB21CRAC -service newservice
Service newservice is running on instance(s) DB21CRAC1
[oracle@node1 ~]$
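
As a side note, whether connected sessions have actually picked up the TAF settings can be checked from GV$SESSION (a sketch, assuming at least one client has connected through each service):

select inst_id, service_name, failover_type, failover_method, failed_over
from gv$session
where service_name in ('newservice', 'pdb2service');
-- FAILOVER_TYPE should show SELECT for newservice sessions and SESSION for pdb2service sessions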


[grid@node1 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.chad
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.net1.network
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.ons
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node1                    STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node1                    STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node2                    STABLE
ora.RECO.dg(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    Started,STABLE
      2        ONLINE  ONLINE       node2                    Started,STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.cdp1.cdp
      1        ONLINE  ONLINE       node1                    STABLE
ora.cdp2.cdp
      1        ONLINE  ONLINE       node1                    STABLE
ora.cdp3.cdp
      1        ONLINE  ONLINE       node2                    STABLE
ora.cvu
      1        ONLINE  ONLINE       node1                    STABLE
ora.db21crac.db
      1        ONLINE  ONLINE       node2                    Open,HOME=/u01/app/o
                                                             racle/product/21.3.0
                                                             .0/dbhome_1,STABLE
      2        ONLINE  ONLINE       node1                    Open,HOME=/u01/app/o
                                                             racle/product/21.3.0
                                                             .0/dbhome_1,STABLE
ora.db21crac.hemantpdb.pdb
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.db21crac.newservice.svc
      1        ONLINE  ONLINE       node1                    STABLE
ora.db21crac.pdb2.pdb
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
ora.db21crac.pdb2service.svc
      1        ONLINE  ONLINE       node2                    STABLE
ora.node1.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.node2.vip
      1        ONLINE  ONLINE       node2                    STABLE
ora.qosmserver
      1        ONLINE  ONLINE       node1                    STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       node2                    STABLE
--------------------------------------------------------------------------------
[grid@node1 ~]$




So, I now have two PDBs, "hemantpdb" and "pdb2", with their corresponding services ("newservice" and "pdb2service") using alternate database instances as Preferred Instances: "hemantpdb"'s service "newservice" runs on node1 (database instance DB21CRAC1) and "pdb2"'s service "pdb2service" runs on node2 (database instance DB21CRAC2), while both PDBs are actually OPEN on both nodes.
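
For completeness: a client would reach each PDB through its service, e.g. with a tnsnames.ora entry like the following (a sketch; the SCAN name "rac-scan" is a placeholder, and the TAF behaviour comes from the server-side settings configured above):

PDB2SERVICE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdb2service)
    )
  )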

Categories: DBA Blogs

Formatting numbers to group exponent by 3

Tom Kyte - Thu, 2024-02-22 18:06
Hi All, it's my first question here :) I have read about formatting numbers in scientific notation, but I can't find a way to group the exponent by 3 (i.e. engineering notation). For instance, I would like this result:

Number  -> Result
1       -> 001.0E0
12      -> 012.0E0
123     -> 123.0E0
1234    -> 001.2E3
12345   -> 012.3E3
123456  -> 123.4E3
1234567 -> 001.2E6
...

Is there a way to do this in a TO_CHAR function with the format clause? Regards, Eddy
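
As far as I know, TO_CHAR's EEEE mask produces scientific notation (one digit before the decimal point) and there is no built-in mask for engineering notation, but the exponent can be computed manually. A possible sketch (assuming positive, non-zero inputs; note that FM000.0 rounds the mantissa, so 123456 comes out as 123.5E3 rather than the truncated 123.4E3 shown above):

with t (n) as (
  select 1 from dual union all
  select 12 from dual union all
  select 123 from dual union all
  select 1234 from dual union all
  select 12345 from dual union all
  select 123456 from dual union all
  select 1234567 from dual
)
select n,
       -- force the exponent to a multiple of 3 (engineering notation)
       to_char(n / power(10, 3 * floor(log(10, n) / 3)), 'FM000.0')
         || 'E' || to_char(3 * floor(log(10, n) / 3)) as eng_notation
from t;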
Categories: DBA Blogs
