Feed aggregator

Impact of Force Logging

Bobby Durrett's DBA Blog - Mon, 2019-09-23 17:29

I am working on upgrading an Oracle database from 11.2.0.4 to 19c and migrating it from HP Unix to Linux. This 15-terabyte database is too large to copy from the old to the new system during our normal weekend downtime window. It also has a ton of weekend batch updates that overlap the normal weekend change window so it would be best for our business processing if the cut over from the old to the new system was as fast as possible.

I want to use GoldenGate to minimize the downtime for the cutover using an approach similar to what is described in this Oracle document:

Zero Downtime Database Upgrade Using Oracle GoldenGate

You start GoldenGate collecting changes on the current production system and then take your time copying the 15 TB of data from the old to new system. Once you are done with the initial load you apply the changes that happened in the meanwhile. Finally, you cut over to the new system. You could even switch the direction of the replication to push changes on the new production system back to the old system to allow for a mid-week back out several days after your upgrade. Pretty cool. A teammate of mine successfully used this approach on an important database some years back.

But the database that I am working on now, unlike the one that my colleague worked on, has a lot of tables set to nologging. Under the right conditions inserts into tables set to nologging are not written to the redo logs and will be missed by GoldenGate. This Oracle article recommends setting your database to FORCE LOGGING so GoldenGate will not miss any updates:

In order to ensure that the required redo information is contained in the Oracle redo logs for segments being replicated, it is important to override any NOLOGGING operations which would prevent the required redo information from being generated. If you are replicating the entire database, enable database force logging mode.

Oracle GoldenGate Performance Best Practices

We could also switch all our application tables and partitions in the source system to logging, but we have so many that I think we would just set the whole database to force logging.
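For reference, the database-level switch is a single command, and it can be backed out just as easily. A minimal sketch:

alter database force logging;
-- and to revert later:
alter database no force logging;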

But the big question which I touched on in my previous post is whether force logging will slow down our weekend batch processing so much that we miss our deadlines for weekend processing to complete and affect our business in a negative way. The more I investigate it the more convinced I am that force logging will have minimal impact on our weekend jobs. This is an unexpected and surprising result. I really thought that our batch processing relied heavily on nologging writes to get the performance they need. It makes me wonder why we are using nologging in the first place. It would be a lot better for backup and recovery to have all our inserts logged to the redo logs. Here is a nice Oracle Support document that lays out the pros and cons of using nologging:

The Gains and Pains of Nologging Operations (Doc ID 290161.1)

I have an entry in my notes for this upgrade project dated 8/26/19 in which I wrote “Surely force logging will bog the … DB down”. Now I think the opposite. So, what changed my mind? It started with the graph from the previous post:

Graph From Previous Post with Little Direct Writes I/O

I was really surprised that the purple line was so low compared to the other two. But I felt like I needed to dig deeper to make sure that I was not just misunderstanding these metrics. The last thing I want to do is make some production change that slows down our weekend processes that already struggle to meet their deadlines. I was not sure what other metrics to look at since I could not find something that directly measures non-logged writes. But then I got the idea of using ASH data.

In my “Fast way to copy data into a table” post I said that to copy data quickly between two Oracle tables “you want everything done nologging, in parallel, and using direct path”. I may have known it then and forgotten, but working on this now has me thinking about the relationship between these three ways of speeding up inserts into tables. I think there are the following two dependencies:

  • Nologging requires direct path
  • Parallel requires direct path

Oracle document “Check For Logging / Nologging On DB Object(s) (Doc ID 269274.1)” confirms the first one. For the second, if you have a target table set to parallel degree > 1 and you enable parallel DML, you get direct path writes when you insert into the target table.
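Here is a minimal sketch of the parallel DML case, with hypothetical table names; target_table is assumed to have been created with a parallel degree greater than 1:

-- enable parallel DML for the session, then insert with a parallel hint
alter session enable parallel dml;
insert /*+ parallel(t, 8) */ into target_table t
select * from source_table;
commit;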

From all this I got the idea to look for direct path write waits in the ASH views. I could use ASH to identify insert statements that are using direct path. Then I could check that the target tables or partitions are set to nologging. Then I would know they are doing non-logged writes even if I did not have a metric that said so directly.

directwritesql.sql looked at all the SQL statements that had direct write waits over the entire 6 weeks of our AWR history. Here is the query, followed by its output:

select
sql_id,count(*) active
from DBA_HIST_ACTIVE_SESS_HISTORY a
where
event = 'direct path write'
group by sql_id
order by active desc;

SQL_ID            ACTIVE
------------- ----------
2pfzwmtj41guu         99
g11qm73a4w37k         88
6q4kuj30agxak         58
fjxzfp4yagm0w         53
bvtzn333rp97k         39
6as226jb93ggd         38
0nx4fsb5gcyzb         36
6gtnb9t0dfj4w         31
3gatgc878pqxh         31
cq433j04qgb18         25

These numbers startled me because they were so low. Each entry in DBA_HIST_ACTIVE_SESS_HISTORY represents 10 seconds of activity, so over 6 weeks our top direct path write waiter waited 990 seconds (99 samples × 10 seconds). Given that we have batch processes running full out for a couple of days every weekend, 990 seconds over 6 weekends is nothing.

I took the top SQL ids and dumped out the SQL text to see what tables they were inserting into. Then I queried the LOGGING column of dba_tables and dba_tab_partitions to see which insert was going into a table or partition set to nologging.

select logging,table_name
from dba_tables
where owner='MYOWNER' and
table_name in
(
... tables inserted into ...
)
order by table_name;


select logging,table_name,count(*) cnt
from dba_tab_partitions
where table_owner='MYOWNER' and
table_name in
(
... tables inserted into ...
)
group by logging,table_name
order by table_name,cnt desc;

This simple check for LOGGING or NOLOGGING status eliminated several of the top direct path write waiters. This process reduced the list of SQL ids down to three top suspects:

SQL_ID            ACTIVE
------------- ----------
cq433j04qgb18         25
71sr61v1rmmqc         17
0u0drxbt5qtqk         11

These are all inserts that are not logged. Notice that the most active one has 250 seconds of direct path write waits over the past 6 weeks. Surely enabling force logging could not cause more than about that much additional run time over the same length of time.

I got the idea of seeing what percentage of the total ASH time was direct path write waits for each of these SQL statements. In every case it was small, as the output following the query sketch below shows:
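The percentages came from a query along these lines — a sketch, with the SQL_ID plugged in as a literal for each statement:

select
count(*) total_sample_count,
sum(case when event = 'direct path write' then 1 else 0 end) dw_sample_count,
100 * sum(case when event = 'direct path write' then 1 else 0 end) / count(*) dw_sample_pct
from DBA_HIST_ACTIVE_SESS_HISTORY
where sql_id = 'cq433j04qgb18';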

cq433j04qgb18

TOTAL_SAMPLE_COUNT DW_SAMPLE_COUNT DW_SAMPLE_PCT
------------------ --------------- -------------
              2508              25    .996810207
 
71sr61v1rmmqc

TOTAL_SAMPLE_COUNT DW_SAMPLE_COUNT DW_SAMPLE_PCT
------------------ --------------- -------------
              1817              17    .935608145

0u0drxbt5qtqk

TOTAL_SAMPLE_COUNT DW_SAMPLE_COUNT DW_SAMPLE_PCT
------------------ --------------- -------------
              8691              11    .126567714

TOTAL_SAMPLE_COUNT was all the samples for that SQL_ID value for the past 6 weeks. DW_SAMPLE_COUNT is the count of those samples that are direct write waits, as discussed above. DW_SAMPLE_PCT is the percentage of the total samples that were direct write wait events. They were all around 1% or lower, which means that write I/O time was only about 1% of the entire run time of these inserts. The rest was query processing, as best I can tell.

Also I used my sqlstat3 script to look at the average run time for these inserts:

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
71sr61v1rmmqc      3333358322 01-SEP-19 12.00.46.381 PM                1         2768879.28         892080    207085.624                  0                      0                  3.817             9028323            1045428               19337954
71sr61v1rmmqc      3333358322 08-SEP-19 10.00.43.551 AM                0         264428.594          98840     28257.339                  0                      0                  3.657              177736             143345                      0
71sr61v1rmmqc      3333358322 08-SEP-19 11.00.49.648 AM                1          2352509.9         767440    160933.191                  0                      0                      0             8729437             791837               19110340
71sr61v1rmmqc      3333358322 15-SEP-19 11.00.03.027 AM                1         3090070.21         904310    190593.062                  0                      0                  2.192             9095421             949579               19470026

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
0u0drxbt5qtqk       382840242 01-SEP-19 02.00.23.436 AM                1         29281391.6        3211050    16624311.7                  0                      0              99532.905            37076159           14440303               24479240
0u0drxbt5qtqk       382840242 08-SEP-19 02.00.11.424 AM                1         3871668.37         424670    2563007.61                  0                      0               1236.003             4622248            2457057                2468983
0u0drxbt5qtqk       382840242 15-SEP-19 03.00.12.349 AM                0         5161808.16         615520    3358994.55                  0                      0              20656.365             6251060            2801828                      0
0u0drxbt5qtqk       382840242 15-SEP-19 04.00.33.661 AM                1         2412910.02         240650    1741053.89                  0                      0                699.717             3050529            1542895                4638794

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
cq433j04qgb18      1267318024 02-SEP-19 10.00.57.871 PM                1          77132.892          51180     10719.692                  0                      0                  2.003              460346              47055                 772468
cq433j04qgb18      1267318024 03-SEP-19 10.00.55.601 PM                1         116064.154          68350      9808.483                  0                      0              15746.609              911571              20422                1256808
cq433j04qgb18      1267318024 04-SEP-19 10.00.31.071 PM                1         106594.074          64030      6328.462                  0                      0              15603.102              777779              14945                1561172
cq433j04qgb18      1267318024 05-SEP-19 10.00.13.265 PM                0          44435.247          31810      2760.438                  0                      0                365.132              139637               5111                 257770
cq433j04qgb18      1267318024 09-SEP-19 10.00.18.346 PM                1         791385.928         324050    171504.931                  0                      0               7484.358             6430665             600703               14262960
cq433j04qgb18      1267318024 10-SEP-19 10.00.29.224 PM                0         1685763.14         676210    304045.354                  0                      0                283.296            11884045             838290               16268667
cq433j04qgb18      1267318024 11-SEP-19 10.01.00.800 PM                0         369644.825         172120     42679.357                  0                      0                  3.929             2443772             151369                3901044
cq433j04qgb18      1267318024 12-SEP-19 10.00.28.499 PM                0          30381.614          25630      1191.884                  0                      0                 422.55               98580               3389                 184812
cq433j04qgb18      1267318024 13-SEP-19 10.00.07.502 PM                0         173286.567         109990     11461.865                  0                      0                 359.37             1475324              63073                2360818
cq433j04qgb18      1267318024 16-SEP-19 10.00.52.175 PM                1         190203.822          93680     47585.666                  0                      0                122.658             1221886             348327                2955258

These queries run for at most a couple of hours. If direct path writes are 1% of their total run time, I estimated that force logging would add about 1% to the elapsed time, or about 2 minutes per execution.

The final step was to try to run one of these top nologging I/O inserts in a test environment with and without force logging to see if the test matches the expected performance slowdown. I was not able to run 0u0drxbt5qtqk without setting up a more elaborate test with the development team. My test of cq433j04qgb18 ran considerably faster with force logging than without it so I think other factors were hiding whatever effect force logging had. But 71sr61v1rmmqc had some nice results that matched my estimates well. This is on a Delphix clone of production so the data was up to date with prod but the underlying I/O was slower.

71sr61v1rmmqc results running 5 times normal 5 times force logging

The individual run times are in seconds and the averages are listed in seconds and in minutes. I ran the insert 5 times with no force logging and 5 times with it, alternating between the two. I dropped the primary key and unique index of the target table to keep from getting constraint errors, and I rolled back the insert each time. Force logging averaged about 1.2 minutes more out of 40 minutes of run time, which is about a 3% increase. My estimate from ASH was about 1%, so this test matches that reasonably well.
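A sketch of what one pair of timed runs looked like, with hypothetical object names (the real test toggled force logging at the database level between runs):

set timing on
-- baseline run: nologging direct path insert, then undo it
alter database no force logging;
insert /*+ append */ into target_table select * from source_table;
rollback;
-- force logging run of the same insert
alter database force logging;
insert /*+ append */ into target_table select * from source_table;
rollback;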

The final test remains. In some upcoming production weekend, I will put in a change to flip the database to force logging and see how it goes. My tests were run on a test system with a different storage system and with no other activity. We might see different results on a heavily loaded system with a queue for the CPU. But, after all this analysis and testing I feel confident that we won’t be able to tell that force logging is enabled. Unfortunately, we sometimes have performance issues anyway due to plan changes or data volume so the force logging switch might get blamed. But I feel confident enough to push for the final test and I think we ultimately will pass that test and be able to use force logging to enable GoldenGate to support a short cut over time for our migration and upgrade project.

Bobby

P.S. A good question came in as a comment about direct path write waits and asynchronous I/O. The system I am testing on does not support async I/O because it is HP-UX on a filesystem. This older blog post talks a bit about async and direct I/O on HP-UX:

https://www.bobbydurrettdba.com/2013/04/26/db_writer_processes-dbwr_io_slaves-with-no-asynch-io-on-hp-ux/

So, your mileage may vary (YMMV) if you do these same queries on a system with asynchronous writes. Linux filesystems support async writes and on HP-UX our RAC system on ASM supports it. It is one of the challenges of writing blog posts. Other people may be in different situations than I am.

Categories: DBA Blogs

Rolling Upgrade of a Galera Cluster with ClusterControl

Yann Neuhaus - Mon, 2019-09-23 03:29
Rolling Upgrade is easy

In this blog, I will show you how easy it is with ClusterControl to perform a Galera cluster “rolling upgrade” without any loss of service.
Let’s say we want to upgrade from Percona XtraDB Cluster version 5.6 to 5.7.
The same procedure can be used to upgrade MariaDB (from 10.1 to 10.2 or 10.3).

Prerequisites

First of all, make sure that all nodes of your Galera cluster are synchronized.
From the Dashboard, on the Overview tab, in the Galera Nodes window, all 3 figures in the “Last Committed” column must be identical.

Then disable the “cluster & node auto-recovery” from the GUI, either by clicking on both icons until they turn red or by temporarily setting the 2 following parameters in the /etc/cmon.d/cmon_N.cnf file (N stands for the Cluster ID) on the ClusterControl server:
– enable_cluster_autorecovery=0
– enable_node_autorecovery=0
Don’t forget to restart the cmon service afterwards:
systemctl restart cmon
This is very important, even crucial: otherwise ClusterControl will keep trying to restart the Galera node when you stop it during the upgrade process.

Now we have to put the first Galera node in maintenance mode for one hour.


The cluster status bar should then look as follows.

Cluster Upgrade

Log on to the first master node using your favorite terminal emulator (“putty” or “MobaXterm”), open 2 sessions, and stop the Percona service on the first node.
# service mysql status
SUCCESS! MySQL (Percona XtraDB Cluster) running (19698)
# service mysql stop
Shutting down MySQL (Percona XtraDB Cluster).. SUCCESS!
# service mysql status
ERROR! MySQL (Percona XtraDB Cluster) is not running

Now remove the existing Percona XtraDB Cluster and Percona XtraBackup packages.
#[root@node1 yum.repos.d]# yum remove percona-xtrabackup* Percona-XtraDB-Cluster*
Loaded plugins: fastestmirror, ovl
Setting up Remove Process
Resolving Dependencies
--> Running transaction check
---> Package Percona-XtraDB-Cluster-56.x86_64 1:5.6.44-28.34.1.el6 will be erased
---> Package Percona-XtraDB-Cluster-client-56.x86_64 1:5.6.44-28.34.1.el6 will be erased
---> Package Percona-XtraDB-Cluster-galera-3.x86_64 0:3.34-1.el6 will be erased
---> Package Percona-XtraDB-Cluster-server-56.x86_64 1:5.6.44-28.34.1.el6 will be erased
---> Package Percona-XtraDB-Cluster-shared-56.x86_64 1:5.6.44-28.34.1.el6 will be erased
---> Package percona-xtrabackup.x86_64 0:2.3.10-1.el6 will be erased
--> Finished Dependency Resolution
Removed:
Percona-XtraDB-Cluster-56.x86_64 1:5.6.44-28.34.1.el6 Percona-XtraDB-Cluster-client-56.x86_64 1:5.6.44-28.34.1.el6 Percona-XtraDB-Cluster-galera-3.x86_64 0:3.34-1.el6
Percona-XtraDB-Cluster-server-56.x86_64 1:5.6.44-28.34.1.el6 Percona-XtraDB-Cluster-shared-56.x86_64 1:5.6.44-28.34.1.el6 percona-xtrabackup.x86_64 0:2.3.10-1.el6
Complete!

Install the new packages
#[root@node1 yum.repos.d]# yum install Percona-XtraDB-Cluster-57
Resolving Dependencies
--> Running transaction check
---> Package Percona-XtraDB-Cluster-57.x86_64 0:5.7.26-31.37.1.el6 will be installed
--> Processing Dependency: Percona-XtraDB-Cluster-client-57 = 5.7.26-31.37.1.el6 for package: Percona-XtraDB-Cluster-57-5.7.26-31.37.1.el6.x86_64
--> Processing Dependency: Percona-XtraDB-Cluster-server-57 = 5.7.26-31.37.1.el6 for package: Percona-XtraDB-Cluster-57-5.7.26-31.37.1.el6.x86_64
--> Running transaction check
---> Package Percona-XtraDB-Cluster-client-57.x86_64 0:5.7.26-31.37.1.el6 will be installed
---> Package Percona-XtraDB-Cluster-server-57.x86_64 0:5.7.26-31.37.1.el6 will be installed
--> Processing Dependency: Percona-XtraDB-Cluster-shared-57 = 5.7.26-31.37.1.el6 for package: Percona-XtraDB-Cluster-server-57-5.7.26-31.37.1.el6.x86_64
--> Processing Dependency: percona-xtrabackup-24 >= 2.4.12 for package: Percona-XtraDB-Cluster-server-57-5.7.26-31.37.1.el6.x86_64
--> Processing Dependency: qpress for package: Percona-XtraDB-Cluster-server-57-5.7.26-31.37.1.el6.x86_64
--> Running transaction check
---> Package Percona-XtraDB-Cluster-shared-57.x86_64 0:5.7.26-31.37.1.el6 will be installed
---> Package percona-xtrabackup-24.x86_64 0:2.4.15-1.el6 will be installed
---> Package qpress.x86_64 0:11-1.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
Installed:
Percona-XtraDB-Cluster-57.x86_64 0:5.7.26-31.37.1.el6
Dependency Installed:
Percona-XtraDB-Cluster-client-57.x86_64 0:5.7.26-31.37.1.el6 Percona-XtraDB-Cluster-server-57.x86_64 0:5.7.26-31.37.1.el6
Percona-XtraDB-Cluster-shared-57.x86_64 0:5.7.26-31.37.1.el6 percona-xtrabackup-24.x86_64 0:2.4.15-1.el6
qpress.x86_64 0:11-1.el6
Complete!

Start the node outside the cluster (in standalone mode) by setting the wsrep_provider variable to none.
$ ps -edf|grep -i mysql
$ mysqld --user=mysql --wsrep-provider='none'

Now run mysql_upgrade in the second session.
[root@node1 /]# mysql_upgrade -u root -p
Enter password:
Checking if update is needed.
Checking server version.
Running queries to upgrade MySQL server.
Checking system database.
mysql.columns_priv OK
mysql.db OK
mysql.engine_cost OK
mysql.user OK
Upgrading the sys schema.
Checking databases.
sys.sys_config OK
Upgrade process completed successfully.

When the upgrade is over, stop the mysqld process.
You can either kill the mysqld process ID or use mysqladmin shutdown with the MySQL root user credentials.
$ mysqladmin shutdown -uroot -p
Now, in the first session, you can restart the upgraded node so that it rejoins the Galera cluster.
$ service mysql start
Starting Percona-Xtradb-server.
190612 13:04:33 mysqld_safe Logging to '/var/log/mysql/mysqld.log'.
190612 13:04:33 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
.. SUCCESS!

Post-exec tasks

From the GUI, disable the maintenance mode and check the new version by logging in to the instance.
[root@node1 /]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 64
Server version: 5.7.27-30-57 Percona XtraDB Cluster (GPL), Release rel30, Revision 64987d4, WSREP version 31.39, wsrep_31.39

Once the first node is upgraded, you can repeat exactly the same procedure for the other nodes of the cluster. At the end, ClusterControl should display the same version for all nodes.

Conclusion

A rolling upgrade of a Galera cluster with ClusterControl is really easy and fast, with little or no impact on the service.

This article Rolling Upgrade of a Galera Cluster with ClusterControl first appeared on Blog dbi services.

Oracle 19c : Point-In-Time Recovery in a PDB

Yann Neuhaus - Fri, 2019-09-20 11:51

Point-In-Time Recovery is also possible in a multitenant environment. As with a non-CDB, a recovery catalog can be used or not. In this blog we will see how to recover a dropped tablespace in a PDB. We will also see the importance of using a recovery catalog or not.
A PITR of a PDB does not affect the remaining PDBs. That means that while doing a PITR on one PDB, people can use the other PDBs. In this blog we are using an Oracle 19c database with local undo mode enabled.

SQL> 
  1  SELECT property_name, property_value
  2  FROM   database_properties
  3* WHERE  property_name = 'LOCAL_UNDO_ENABLED'

PROPERTY_NAME        PROPE
-------------------- -----
LOCAL_UNDO_ENABLED   TRUE
SQL>

SELECT con_id, tablespace_name FROM   cdb_tablespaces WHERE  tablespace_name LIKE 'UNDO%';

    CON_ID TABLESPACE_NAME
---------- ------------------------------
         3 UNDOTBS1
         4 UNDOTBS1
         1 UNDOTBS1

SQL>

We suppose that:
– we have a tablespace named MYTABPDB2
– we have a valid backup of the whole database
– a recovery catalog is not used

Now, connected to PDB2, let’s drop a tablespace after creating a restore point.

SQL> show con_name;

CON_NAME
------------------------------
PDB2

SQL> create restore point myrestpoint;

Restore point created.

SQL>
SQL> drop tablespace mytabpdb2 including contents and datafiles;

Tablespace dropped.

SQL>

And now let’s perform a PITR to the restore point myrestpoint

1- Connect to the root container

[oracle@oraadserver ~]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Fri Sep 20 13:07:07 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1546409981)

RMAN>

2- Close the PDB

RMAN> ALTER PLUGGABLE DATABASE PDB2 close;

using target database control file instead of recovery catalog
Statement processed

RMAN>

3- Do the PITR

RMAN> run
{
  SET UNTIL RESTORE POINT myrestpoint;
   RESTORE PLUGGABLE DATABASE pdb2;
   RECOVER PLUGGABLE DATABASE pdb2;
}2> 3> 4> 5> 6>

executing command: SET until clause

Starting restore at 20-SEP-19
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=54 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00013 to /u01/app/oracle/oradata/ORCL/pdb2/system01.dbf
channel ORA_DISK_1: restoring datafile 00014 to /u01/app/oracle/oradata/ORCL/pdb2/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00015 to /u01/app/oracle/oradata/ORCL/pdb2/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00016 to /u01/app/oracle/oradata/ORCL/pdb2/users01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_20/o1_mf_nnndf_TAG20190920T141945_gr9jzry9_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_20/o1_mf_nnndf_TAG20190920T141945_gr9jzry9_.bkp tag=TAG20190920T141945
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 20-SEP-19

Starting recover at 20-SEP-19
current log archived
using channel ORA_DISK_1


starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 20-SEP-19

RMAN>

4- Open the PDB in resetlogs mode

RMAN> alter pluggable DATABASE  pdb2 open resetlogs;

Statement processed

RMAN>

I did not get any error from RMAN, but when looking at the alert log file, I found the following errors:

PDB2(4):Pluggable database PDB2 dictionary check beginning
PDB2(4):Tablespace 'MYTABPDB2' #7 found in data dictionary,
PDB2(4):but not in the controlfile. Adding to controlfile.
PDB2(4):File #25 found in data dictionary but not in controlfile.
PDB2(4):Creating OFFLINE file 'MISSING00025' in the controlfile.
PDB2(4):Pluggable Database PDB2 Dictionary check complete
PDB2(4):Database Characterset for PDB2 is AL32UTF8

It seems there is some issue with the recovery of the MYTABPDB2 tablespace. Connected to PDB2, I can see:

SQL> select FILE_NAME,TABLESPACE_NAME from dba_data_files where TABLESPACE_NAME='MYTABPDB2';

FILE_NAME
--------------------------------------------------------------------------------
TABLESPACE_NAME
------------------------------
/u01/app/oracle/product/19.0.0/dbhome_3/dbs/MISSING00025
MYTABPDB2

The tablespace was not recovered as expected.
What happened? In fact this issue is expected, according to Doc ID 2435452.1, which states:
“If the point in time recovery of the pluggable database is performed without the catalog, then it is expected to fail.”

As we are not using a recovery catalog, the backup information is stored in the control file, and it seems that the current control file is no longer aware of data file 25.
As specified in the document, we have to use a recovery catalog.
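For completeness, the catalog used below was prepared beforehand. A minimal sketch of that one-time setup, reusing the same (hypothetical) rman/rman@rmancat credentials:

[oracle@oraadserver ~]$ rman catalog rman/rman@rmancat
RMAN> create catalog;
RMAN> connect target /
RMAN> register database;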

Now let’s connect to a catalog and do the same PITR again.
After connecting to the catalog we take a full backup. Then we drop the tablespace and run the same kind of recovery commands again, this time while connected to the catalog, using a time just before the tablespace was dropped.

[oracle@oraadserver trace]$ rman catalog rman/rman@rmancat

Recovery Manager: Release 19.0.0.0.0 - Production on Fri Sep 20 15:28:29 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to recovery catalog database

RMAN> connect target /

connected to target database: ORCL (DBID=1546409981)

After closing PDB2 we run following bloc

RMAN> run
{
  SET UNTIL TIME "to_date('20-SEP-2019 15:27:00','DD-MON-YYYY HH24:MI:SS')";
   RESTORE PLUGGABLE DATABASE pdb2;
   RECOVER PLUGGABLE DATABASE pdb2;
}
2> 3> 4> 5> 6>
executing command: SET until clause

Starting restore at 20-SEP-19
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00013 to /u01/app/oracle/oradata/ORCL/pdb2/system01.dbf
channel ORA_DISK_1: restoring datafile 00014 to /u01/app/oracle/oradata/ORCL/pdb2/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00015 to /u01/app/oracle/oradata/ORCL/pdb2/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00016 to /u01/app/oracle/oradata/ORCL/pdb2/users01.dbf
channel ORA_DISK_1: restoring datafile 00026 to /u01/app/oracle/oradata/ORCL/pdb2/mytabpdb201.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_20/o1_mf_nnndf_TAG20190920T152554_gr9nws0x_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_20/o1_mf_nnndf_TAG20190920T152554_gr9nws0x_.bkp tag=TAG20190920T152554
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15

datafile 26 switched to datafile copy
input datafile copy RECID=5 STAMP=1019489668 file name=/u01/app/oracle/oradata/ORCL/pdb2/mytabpdb201.dbf
Finished restore at 20-SEP-19
starting full resync of recovery catalog
full resync complete

Starting recover at 20-SEP-19
using channel ORA_DISK_1


starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 20-SEP-19

RMAN>

We then open PDB2 in resetlogs mode and verify with sqlplus:

SQL> select FILE_NAME,TABLESPACE_NAME from dba_data_files where TABLESPACE_NAME='MYTABPDB2';

FILE_NAME
--------------------------------------------------------------------------------
TABLESPACE_NAME
------------------------------
/u01/app/oracle/oradata/ORCL/pdb2/mytabpdb201.dbf
MYTABPDB2


SQL>

And this time the PITR works fine. The tablespace was restored.

Conclusion

As seen in this blog, it is recommended to use a recovery catalog when performing PITR operations in a multitenant environment.

This article Oracle 19c : Point-In-Time Recovery in a PDB first appeared on Blog dbi services.

OpenWorld and Code One 2019 : The Journey Home

Tim Hall - Fri, 2019-09-20 09:06

I got up at a reasonable time and got caught up with blog posts, then it was time to check out and get the BART to the airport. Bag drop was empty, because the rest of the planet was waiting at security. After what felt like an eternity I was through security and sat down and waited for my plane…

We boarded the flight from San Francisco to Amsterdam on time and didn’t have a significant wait for the departure slot, so the captain said we would arrive early. No luck with a spare seat on this flight. The guy next to me was about my size, but wasn’t making an effort to stay in his space. There was some serious man-spreading going on. I ended up spending most of the flight leaning into the aisle and pulling my arm across my body, so my left elbow feels knackered now. Doing that for 11 hours is not fun. I managed to watch the following films.

  • The Shape of Water – I love this film. I’ve seen it a load of times.
  • Rocketman – I wasn’t feeling this at the start. I’m not big on musicals, and I didn’t like the stuff when he was a kid. Once Taron Egerton started playing him it was cool. I kind-of forgot he wasn’t Elton John. If you can get past the start, it’s worth a go!
  • The Accountant – I liked it. Ben Affleck doing deadpan and expressionless is the perfect role for him.
  • John Wick: Chapter 3 – Parabellum – I got up to the final sequence, so I’m not sure how it ends. Pretty much the same as the previous films, which I liked. Just crazy fight scenes with loads of guns.

There was one bit of the flight that was odd. The in-flight entertainment died, then we hit some turbulence. Cue me deciding it was linked and we were all going to die… Pretty soon the turbulence stopped, then after about 10 minutes the screens rebooted…

I had quite a long wait at Schiphol. About 3 hours. That was pretty dull, but what are you going to do?

The flight from Amsterdam to Birmingham was delayed by a few minutes, then there was the issue of people trying to board with 15 pieces of hand luggage and a donkey. I had my bag on my feet. Luckily it was only an hour flight.

I was originally planning to get the train home, but I was so tired I got a taxi. The driver was a nice guy and we had a chat about his kids and future plans, which is a lot nicer than listening to me drone on…

I’m now home and started doing the washing…

I’ll do a wrap-up post tomorrow, with some thoughts about the event…

Cheers

Tim…


OpenShift 4.1 Partner Introduction Day

Yann Neuhaus - Wed, 2019-09-18 12:57

Today was the OpenShift Partner Introduction Day by Red Hat. The event took place in Lausanne.
There were people with different backgrounds.

After presenting the Red Hat company, the speaker explained what OpenShift is and why people should adopt it.
What we will retain is that OpenShift is trusted, enterprise-grade Kubernetes.

With OpenShift we get, for example:
-Automated, full-stack installation from the container host to application services
-Seamless Kubernetes deployment to any cloud or on-premises environment
-Autoscaling of cloud resources
-One-click updates for platform, services, and applications

The new features in version 4.1 were presented. The speaker also showed the Red Hat OpenShift business value.

The notions of Ansible Playbooks, Operators, CRI-O, Helm… were also explained.
The speaker also did a little demonstration, creating a small project with OpenShift.

This was a very interesting event. It was general and allowed people to understand where OpenShift sits in the containerization landscape. But we have to keep in mind that there are a lot of components to understand when using OpenShift.

This article OpenShift 4.1 Partner Introduction Day first appeared on Blog dbi services.

VirtualBox 6.0.12

Tim Hall - Tue, 2019-09-17 11:35

I know I’ve been distracted with the lead up to OpenWorld and Code One 2019, but how did I miss this release? VirtualBox 6.0.12 arrived two weeks ago.

The downloads and changelog are in the usual places.

Being a reckless type, I downloaded it and installed it on my Windows 10 laptop this morning. I’ve got a live demo in 2 hours!

The install was fine and my Vagrant VMs start with no problems. More extensive testing and installations on Oracle Linux and macOS hosts will happen when I get home, but so far so good!

Cheers

Tim…


One More Thing: New Oracle Cloud free tier better than AWS free tier

Iggy Fernandez - Mon, 2019-09-16 19:20
Larry Ellison just concluded his Oracle OpenWorld keynote with the announcement of an Oracle Cloud free tier that is better than the AWS free tier. The Oracle Cloud free tier never expires and includes the crown jewels. The slides say it all.
Categories: DBA Blogs

Get Azure Networking Hierarchy Components With Powershell

Jeff Moss - Mon, 2019-09-16 02:26

I needed to see the VNETs, their subnets, and the NICs/IPs attached to those subnets. This is all available in the Azure Portal, but I wanted a nice hierarchical listing, so here is a PowerShell script for that…

# List every VNET, its subnets, and the IP configurations attached to each subnet
$vnets = Get-AzVirtualNetwork
foreach ($vnet in $vnets)
{
    "VNET: " + $vnet.Name
    "Subnet Count: " + $vnet.Subnets.count
    foreach ($subnet in $vnet.Subnets)
    {
        "..Subnet: " + $subnet.Name
        if ($subnet.IpConfigurations.count -eq 0)
        { "Subnet has no IpConfigurations" }
        else
        {
            foreach ($IpConfiguration in $subnet.IpConfigurations)
            {
                # Print the Id from the resource group name onwards ("resourceGroups/" is 15 characters)
                "...." + $IpConfiguration.Id.Substring($IpConfiguration.Id.IndexOf("resourceGroups") + 15)
            }
        }
    }
}


The output looks like this (redacted):

VNET: myvnet1
Subnet Count: 1
..Subnet: mysubnet1
Subnet has no IpConfigurations
VNET: myvnet2
Subnet Count: 1
..Subnet: mysubnet2
....myresourcegroup2/providers/Microsoft.Network/networkInterfaces/myvm1-nic1/ipConfigurations/ipconfig1
....myresourcegroup2/providers/Microsoft.Network/networkInterfaces/myvm2-nic1/ipConfigurations/ipconfig1
VNET: myvnet3
Subnet Count: 0

Duplicate OMF DB Using Backupset To New Host And Directories

Michael Dinh - Sat, 2019-09-14 10:23

Database 12.1.0.2.0 is created using OMF.

At SOURCE:
db_create_file_dest=/u01/app/oracle/oradata
db_recovery_file_dest=/u01/app/oracle/fast_recovery_area

At DESTINATION:
db_create_file_dest=/u01/oradata
db_recovery_file_dest=/u01/fast_recovery_area

BACKUP LOCATION on shared storage:
/media/shared_storage/rman_backup/EMU

I had to explicitly set control_files since I was not able to determine how it can be done automatically.

set control_files='/u01/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl','/u01/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl'

If control_files is not set, then the duplicate will restore the control files to their original locations:

channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/app/oracle/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl
output file name=/u01/app/oracle/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl

STEPS:

================================================================================
### SOURCE: Backup Database
================================================================================

--------------------------------------------------
### Retrieve controlfile locations.
--------------------------------------------------

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl
/u01/app/oracle/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl

SQL>

--------------------------------------------------
### Retrieve redo log locations.
--------------------------------------------------

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/EMU/onlinelog/o1_mf_1_gqsq2mvy_.log
/u01/app/oracle/fast_recovery_area/EMU/onlinelog/o1_mf_1_gqsq2myy_.log
/u01/app/oracle/oradata/EMU/onlinelog/o1_mf_2_gqsq2n1o_.log
/u01/app/oracle/fast_recovery_area/EMU/onlinelog/o1_mf_2_gqsq2n3g_.log
/u01/app/oracle/oradata/EMU/onlinelog/o1_mf_3_gqsq2n50_.log
/u01/app/oracle/fast_recovery_area/EMU/onlinelog/o1_mf_3_gqsq2nql_.log

6 rows selected.

SQL>

--------------------------------------------------
### Backup database.
--------------------------------------------------

[oracle@ol741 EMU]$ export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
[oracle@ol741 EMU]$ rman @ backup.rman

Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 14 16:09:06 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN> spool log to /media/shared_storage/rman_backup/EMU/rman_EMU_level0.log
2> set echo on
3> connect target;
4> show all;
5> set command id to "BACKUP_EMU";
6> run {
7> allocate channel d1 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
8> allocate channel d2 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
9> allocate channel d3 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
10> allocate channel d4 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
11> allocate channel d5 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
12> backup as compressed backupset incremental level 0 check logical database filesperset 1 tag="EMU"
13> plus archivelog filesperset 8 tag="EMU"
14> ;
15> }
16> alter database backup controlfile to trace as '/media/shared_storage/rman_backup/EMU/cf_@.sql' REUSE RESETLOGS;
17> create pfile='/media/shared_storage/rman_backup/EMU/init@.ora' from spfile;
18> list backup summary tag="EMU";
19> list backup of spfile tag="EMU";
20> list backup of controlfile tag="EMU";
21> report schema;
22> exit

--------------------------------------------------
### Retrieve datafile and tempfile locations.
--------------------------------------------------

[oracle@ol741 EMU]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 14 16:09:44 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: EMU (DBID=3838062773)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name EMU

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    700      SYSTEM               YES     /u01/app/oracle/oradata/EMU/datafile/o1_mf_system_gqsq2qw4_.dbf
2    550      SYSAUX               NO      /u01/app/oracle/oradata/EMU/datafile/o1_mf_sysaux_gqsq30xo_.dbf
3    265      UNDOTBS1             YES     /u01/app/oracle/oradata/EMU/datafile/o1_mf_undotbs1_gqsq3875_.dbf
4    5        USERS                NO      /u01/app/oracle/oradata/EMU/datafile/o1_mf_users_gqsq405f_.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       /u01/app/oracle/oradata/EMU/datafile/o1_mf_temp_gqsq3c3d_.tmp

RMAN> exit

Recovery Manager complete.
[oracle@ol741 EMU]$


================================================================================
### TARGET: Restore Database
================================================================================

--------------------------------------------------
### Create pfile.
--------------------------------------------------

[oracle@ol742 dbs]$ pwd
/u01/app/oracle/product/12.1.0.2/db_1/dbs

[oracle@ol742 dbs]$ cat initemu.ora
*.db_name='emu'
[oracle@ol742 dbs]$

--------------------------------------------------
### Startup nomount.
--------------------------------------------------

[oracle@ol742 EMU]$ . oraenv <<< emu
[oracle@ol742 EMU]$ sqlplus / as sysdba
SQL> startup nomount;
ORACLE instance started.

Total System Global Area  234881024 bytes
Fixed Size                  2922904 bytes
Variable Size             176162408 bytes
Database Buffers           50331648 bytes
Redo Buffers                5464064 bytes
SQL>

--------------------------------------------------
### Create new directories.
--------------------------------------------------

[oracle@ol742 EMU]$ mkdir -p /u01/oradata/EMU/controlfile/
[oracle@ol742 EMU]$ mkdir -p /u01/fast_recovery_area/EMU/controlfile

--------------------------------------------------
### Duplicate database.
--------------------------------------------------

[oracle@ol742 EMU]$ export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
[oracle@ol742 EMU]$ rman @ dup_omf_bks.rman

Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 14 16:37:51 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN> spool log to /media/shared_storage/rman_backup/EMU/rman_duplicate_database.log
2> set echo on
3> connect auxiliary *
4> show all;
5> set command id to "DUPLICATE_EMU";
6> DUPLICATE DATABASE TO emu
7>   SPFILE
8>   set db_file_name_convert='/u01/app/oracle/oradata','/u01/oradata'
9>   set log_file_name_convert='/u01/app/oracle/oradata','/u01/oradata'
10>   set db_create_file_dest='/u01/oradata'
11>   set db_recovery_file_dest='/u01/fast_recovery_area'
12>   set control_files='/u01/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl','/u01/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl'
13>   BACKUP LOCATION '/media/shared_storage/rman_backup/EMU'
14>   NOFILENAMECHECK
15> ;
16> exit

--------------------------------------------------
### Retrieve datafile and tempfile locations.
--------------------------------------------------

[oracle@ol742 EMU]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 14 16:40:33 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: EMU (DBID=3838070815)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name EMU

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    700      SYSTEM               YES     /u01/oradata/EMU/datafile/o1_mf_system_gqsyvo1y_.dbf
2    550      SYSAUX               NO      /u01/oradata/EMU/datafile/o1_mf_sysaux_gqsywg40_.dbf
3    265      UNDOTBS1             YES     /u01/oradata/EMU/datafile/o1_mf_undotbs1_gqsywx7n_.dbf
4    5        USERS                NO      /u01/oradata/EMU/datafile/o1_mf_users_gqsyxd90_.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       /u01/oradata/EMU/datafile/o1_mf_temp_gqsyy469_.tmp

RMAN> exit


Recovery Manager complete.
[oracle@ol742 EMU]$

--------------------------------------------------
### Retrieve controlfile locations.
--------------------------------------------------

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
/u01/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl
/u01/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl

SQL>

--------------------------------------------------
### Retrieve redo log locations.
--------------------------------------------------

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/u01/oradata/EMU/onlinelog/o1_mf_3_gqsyy2kh_.log
/u01/fast_recovery_area/EMU/onlinelog/o1_mf_3_gqsyy2lx_.log
/u01/oradata/EMU/onlinelog/o1_mf_2_gqsyy165_.log
/u01/fast_recovery_area/EMU/onlinelog/o1_mf_2_gqsyy17p_.log
/u01/oradata/EMU/onlinelog/o1_mf_1_gqsyxztc_.log
/u01/fast_recovery_area/EMU/onlinelog/o1_mf_1_gqsyxzw3_.log

6 rows selected.

SQL>

Logs:

rman_EMU_level0.log

rman_duplicate_database.log

 

Oracle Sign up: more problems

Dietrich Schroff - Fri, 2019-09-13 14:03
I thought I was successful, but:

I received a mail with the following content:

"We have re-authorized a new, specific amount on the credit/debit card used during the sign up process."

and

"To verify the account you have created, please confirm the specific amount re-authorized."


My problem: there is no "re-authorized amount" on my bank account. I do not know what "re-authorized" means.
Does it mean the amount is charged to my credit card (then I should see it)?
Or is this process buggy and I was for some reason not charged?
Or is re-authorization something else?



RMAN in a Multitenant Environment

Yann Neuhaus - Fri, 2019-09-13 12:13

The Oracle Multitenant architecture came with Oracle 12c a few years ago. For people used to working with traditional non-CDB databases, doing backup and recovery with pluggable databases (PDBs) might be confusing the first time.
In this document we try to explain how to use RMAN backup and recovery in a multitenant environment, using an Oracle 19c database with 2 pluggable databases.
Below the configuration we are using.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           READ WRITE NO

We will not use a recovery catalog, but its use is the same as in a non-CDB environment.
Note that starting with Oracle 19c, we can now connect to a recovery catalog when the target database is a PDB, as sketched below.
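A minimal illustration of that 19c capability, with hypothetical connect strings:

[oracle@oraadserver ~]$ rman target sys/root@pdb1 catalog rman/rman@rmancat
RMAN> backup database;  # backs up only this PDB, recorded in the catalog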

Whole CDB Backups

Backing up a whole CDB is like backing up a non-CDB database. We have to back up:
-the root container
-all pluggable databases
-the archived logs
The steps are:
1- Connect to the root container with a common user having SYSBACKUP or SYSDBA privileges

RMAN> connect target /

connected to target database: ORCL (DBID=1546409981)

RMAN>

2- Launch the backup

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

Starting backup at 11-SEP-19
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=279 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=6 RECID=1 STAMP=1018632767
input archived log thread=1 sequence=7 RECID=2 STAMP=1018690452
input archived log thread=1 sequence=8 RECID=3 STAMP=1018691169
input archived log thread=1 sequence=9 RECID=4 STAMP=1018693343
channel ORA_DISK_1: starting piece 1 at 11-SEP-19
channel ORA_DISK_1: finished piece 1 at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2019_09_11/o1_mf_annnn
...
...
Finished backup at 11-SEP-19

Starting Control File and SPFILE Autobackup at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/autobackup/2019_09_11/o1_mf_s_1018693411_gqkcr46z_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 11-SEP-19

Oracle also recommends backing up the root container from time to time.
Once connected to the root container with a common user, run the backup command:

RMAN> BACKUP DATABASE ROOT;

Starting backup at 11-SEP-19
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/ORCL/system01.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ORCL/sysaux01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ORCL/undotbs01.dbf
input datafile file number=00007 name=/u01/app/oracle/oradata/ORCL/users01.dbf
channel ORA_DISK_1: starting piece 1 at 11-SEP-19
channel ORA_DISK_1: finished piece 1 at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T103019_gqkd4vxb_.bkp tag=TAG20190911T103019 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:16
Finished backup at 11-SEP-19

Starting Control File and SPFILE Autobackup at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/autobackup/2019_09_11/o1_mf_s_1018693836_gqkd5d65_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 11-SEP-19

RMAN>
PDB Backups

Backing up PDBs is not difficult; there are just some mechanisms to know. When connected with RMAN to the root container, we can back up one or more PDBs, while when connected directly to a PDB, we can only back up that PDB.
1- Connect to the root container with a common user having SYSBACKUP or SYSDBA privileges

RMAN> connect target /

connected to target database: ORCL (DBID=1546409981)

RMAN>

And back up individual PDBs:

RMAN> BACKUP PLUGGABLE DATABASE PDB1,PDB2;

Starting backup at 11-SEP-19
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
...
...
Starting Control File and SPFILE Autobackup at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/autobackup/2019_09_11/o1_mf_s_1018695111_gqkff85l_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 11-SEP-19

RMAN>

2- Connecting to the PDBs with a local user having SYSBACKUP or SYSDBA privileges
PDB1

RMAN> connect target sys/root@pdb1

connected to target database: ORCL:PDB1 (DBID=4178439423)

RMAN> BACKUP DATABASE;

Starting backup at 11-SEP-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=279 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00010 name=/u01/app/oracle/oradata/ORCL/pdb1/sysaux01.dbf
input datafile file number=00009 name=/u01/app/oracle/oradata/ORCL/pdb1/system01.dbf
input datafile file number=00011 name=/u01/app/oracle/oradata/ORCL/pdb1/undotbs01.dbf
input datafile file number=00012 name=/u01/app/oracle/oradata/ORCL/pdb1/users01.dbf
channel ORA_DISK_1: starting piece 1 at 11-SEP-19
channel ORA_DISK_1: finished piece 1 at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359B0BEC8B4545E0531502A8C0F64E/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T110707_gqkg9w5n_.bkp tag=TAG20190911T110707 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 11-SEP-19

RMAN>

PDB2

RMAN> connect target sys/root@pdb2

connected to target database: ORCL:PDB2 (DBID=3996013191)

RMAN>  BACKUP DATABASE;

Starting backup at 11-SEP-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=279 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00014 name=/u01/app/oracle/oradata/ORCL/pdb2/sysaux01.dbf
input datafile file number=00013 name=/u01/app/oracle/oradata/ORCL/pdb2/system01.dbf
input datafile file number=00015 name=/u01/app/oracle/oradata/ORCL/pdb2/undotbs01.dbf
input datafile file number=00016 name=/u01/app/oracle/oradata/ORCL/pdb2/users01.dbf
channel ORA_DISK_1: starting piece 1 at 11-SEP-19
channel ORA_DISK_1: finished piece 1 at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T110844_gqkgdwmm_.bkp tag=TAG20190911T110844 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 11-SEP-19

RMAN>
Tablespace Backup in a PDB

Tablespaces in different PDBs can have the same name. So to eliminate ambiguity, always connect to the PDB whose tablespaces you want to back up.
1- Connect to the PDB with a local user having SYSBACKUP or SYSDBA privilege

[oracle@oraadserver ~]$ rman target sys/root@pdb1

Recovery Manager: Release 19.0.0.0.0 - Production on Wed Sep 11 11:35:53 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL:PDB1 (DBID=4178439423)

2- Issue the BACKUP TABLESPACE command

RMAN> BACKUP TABLESPACE USERS;

Starting backup at 11-SEP-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=290 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00012 name=/u01/app/oracle/oradata/ORCL/pdb1/users01.dbf
channel ORA_DISK_1: starting piece 1 at 11-SEP-19
channel ORA_DISK_1: finished piece 1 at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359B0BEC8B4545E0531502A8C0F64E/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T113623_gqkj0qxl_.bkp tag=TAG20190911T113623 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 11-SEP-19

RMAN>
Data File Backup in a PDB

Data files are identified by a unique number across the CDB, so for the backup we can connect either to the root container or directly to the PDB.
Note that when connecting directly to the PDB, we can only back up files belonging to that PDB.

1- Connect to the root container with a common user having SYSBACKUP or SYSDBA privileges

[oracle@oraadserver admin]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Wed Sep 11 11:54:42 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1546409981)

2- Backup the Data File

RMAN> BACKUP DATAFILE 10;

Starting backup at 11-SEP-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=274 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00010 name=/u01/app/oracle/oradata/ORCL/pdb1/sysaux01.dbf
channel ORA_DISK_1: starting piece 1 at 11-SEP-19
channel ORA_DISK_1: finished piece 1 at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359B0BEC8B4545E0531502A8C0F64E/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T115504_gqkk3s44_.bkp tag=TAG20190911T115504 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 11-SEP-19

Starting Control File and SPFILE Autobackup at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/autobackup/2019_09_11/o1_mf_s_1018698908_gqkk3wwt_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 11-SEP-19

RMAN>
Whole CDB Complete Recovery

Suppose we lose all data files, control files and redo log files of the whole container. We can restore with the following steps:
1- Restore Control Files while connecting to the root container

[oracle@oraadserver ORCL]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Wed Sep 11 14:25:25 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup nomount

Oracle instance started

Total System Global Area     872413680 bytes

Fixed Size                     9140720 bytes
Variable Size                297795584 bytes
Database Buffers             557842432 bytes
Redo Buffers                   7634944 bytes

RMAN>

RMAN> restore controlfile FROM AUTOBACKUP;

Starting restore at 11-SEP-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=21 device type=DISK

recovery area destination: /u01/app/oracle/fast_recovery_area
database name (or database unique name) used for search: ORCL
channel ORA_DISK_1: AUTOBACKUP /u01/app/oracle/fast_recovery_area/ORCL/autobackup/2019_09_11/o1_mf_s_1018698908_gqkk3wwt_.bkp found in the recovery area
AUTOBACKUP search with format "%F" not attempted because DBID was not set
channel ORA_DISK_1: restoring control file from AUTOBACKUP /u01/app/oracle/fast_recovery_area/ORCL/autobackup/2019_09_11/o1_mf_s_1018698908_gqkk3wwt_.bkp
channel ORA_DISK_1: control file restore from AUTOBACKUP complete
output file name=/u01/app/oracle/oradata/ORCL/control01.ctl
output file name=/u01/app/oracle/fast_recovery_area/ORCL/control02.ctl
Finished restore at 11-SEP-19
RMAN>

2- Mount the CDB

RMAN> alter database mount;

released channel: ORA_DISK_1
Statement processed

3- List Backup of archived logs

RMAN> list backup of archivelog all;


List of Backup Sets
===================


BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
2       397.52M    DISK        00:00:04     11-SEP-19
        BP Key: 2   Status: AVAILABLE  Compressed: NO  Tag: TAG20190911T102225
        Piece Name: /u01/app/oracle/fast_recovery_area/ORCL/backupset/2019_09_11/o1_mf_annnn_TAG20190911T102225_gqkcp1k9_.bkp

  List of Archived Logs in backup set 2
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    6       2120330    10-SEP-19 2155559    10-SEP-19
  1    7       2155559    10-SEP-19 2257139    11-SEP-19
  1    8       2257139    11-SEP-19 2327294    11-SEP-19
  1    9       2327294    11-SEP-19 2342937    11-SEP-19

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
7       5.00K      DISK        00:00:00     11-SEP-19
        BP Key: 7   Status: AVAILABLE  Compressed: NO  Tag: TAG20190911T102330
        Piece Name: /u01/app/oracle/fast_recovery_area/ORCL/backupset/2019_09_11/o1_mf_annnn_TAG20190911T102330_gqkcr2n1_.bkp

  List of Archived Logs in backup set 7
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    10      2342937    11-SEP-19 2342996    11-SEP-19

RMAN>

4- Restore the database according to the sequence

RMAN> restore database until sequence 11;

Starting restore at 11-SEP-19
Starting implicit crosscheck backup at 11-SEP-19
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=24 device type=DISK
Crosschecked 18 objects
...
...
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 11-SEP-19

RMAN>

5- Recover the database

RMAN> recover database until sequence 11;

Starting recover at 11-SEP-19
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 10 is already on disk as file /u01/app/oracle/fast_recovery_area/ORCL/archivelog/2019_09_11/o1_mf_1_10_gqkcr27d_.arc
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL/archivelog/2019_09_11/o1_mf_1_10_gqkcr27d_.arc thread=1 sequence=10
media recovery complete, elapsed time: 00:00:00
Finished recover at 11-SEP-19

RMAN>

6- Open database in Resetlogs mode

RMAN> alter database open resetlogs;

Statement processed

RMAN>
PDBs Complete Recovery

To recover a PDB we can:
– Connect to the root and then use the RESTORE PLUGGABLE DATABASE and RECOVER PLUGGABLE DATABASE commands.
1- Close the PDB to recover

SQL> alter pluggable database pdb1 close;

Pluggable database altered.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           MOUNTED
         4 PDB2                           READ WRITE NO
SQL>

2- Connect with RMAN to the root container and issue the RESTORE command

RMAN> RESTORE PLUGGABLE DATABASE  PDB1;

Starting restore at 11-SEP-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=56 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00009 to /u01/app/oracle/oradata/ORCL/pdb1/system01.dbf
channel ORA_DISK_1: restoring datafile 00010 to /u01/app/oracle/oradata/ORCL/pdb1/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00011 to /u01/app/oracle/oradata/ORCL/pdb1/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00012 to /u01/app/oracle/oradata/ORCL/pdb1/users01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORCL/92359B0BEC8B4545E0531502A8C0F64E/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T144816_gqkv9btm_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359B0BEC8B4545E0531502A8C0F64E/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T144816_gqkv9btm_.bkp tag=TAG20190911T144816
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 11-SEP-19

3- Recover the pluggable database

RMAN> RECOVER PLUGGABLE DATABASE  PDB1;

Starting recover at 11-SEP-19
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 11-SEP-19

4- Open the pluggable database

RMAN> alter pluggable database PDB1 open;

Statement processed

RMAN>

– Connect to the PDB and use the RESTORE DATABASE and RECOVER DATABASE commands.
1- Close the PDB to recover

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           MOUNTED
SQL>

2- Connect to the PDB and issue the RESTORE DATABASE command

[oracle@oraadserver pdb1]$ rman target sys/root@pdb2

Recovery Manager: Release 19.0.0.0.0 - Production on Wed Sep 11 15:19:03 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL:PDB2 (DBID=3996013191, not open)

RMAN> RESTORE DATABASE;

Starting restore at 11-SEP-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=45 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00013 to /u01/app/oracle/oradata/ORCL/pdb2/system01.dbf
channel ORA_DISK_1: restoring datafile 00014 to /u01/app/oracle/oradata/ORCL/pdb2/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00015 to /u01/app/oracle/oradata/ORCL/pdb2/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00016 to /u01/app/oracle/oradata/ORCL/pdb2/users01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T144816_gqkv9tfq_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T144816_gqkv9tfq_.bkp tag=TAG20190911T144816
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 11-SEP-19

RMAN>

3- Recover the pluggable database

RMAN> recover database;

Starting recover at 11-SEP-19
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 11-SEP-19

4- Open the database

RMAN> alter database open ;

Statement processed

RMAN>
Complete Tablespace Recovery in a PDB

- Non-SYSTEM Tablespace

To recover a non-SYSTEM tablespace in a PDB we can follow these steps:
1- Take the tablespace offline while connected to the PDB

SQL> ALTER TABLESPACE MYTAB OFFLINE;

Tablespace altered.

SQL>

2- Connect to the PDB with RMAN and issue the RESTORE TABLESPACE command

[oracle@oraadserver pdb2]$ rman target sys/root@pdb2

Recovery Manager: Release 19.0.0.0.0 - Production on Wed Sep 11 16:52:37 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL:PDB2 (DBID=3996013191)

RMAN> RESTORE TABLESPACE MYTAB;

Starting restore at 11-SEP-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=274 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00017 to /u01/app/oracle/oradata/ORCL/pdb2/mytab01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T163708_gql1o4gx_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T163708_gql1o4gx_.bkp tag=TAG20190911T163708
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 11-SEP-19

3- Issue the RECOVER TABLESPACE command

RMAN> RECOVER TABLESPACE MYTAB;

Starting recover at 11-SEP-19
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 11-SEP-19

RMAN>

4- Bring the tablespace back ONLINE

RMAN> ALTER TABLESPACE MYTAB ONLINE;

Statement processed

RMAN>

- SYSTEM Tablespace

To recover a SYSTEM Tablespace in a PDB

1- Shut down the entire CDB and mount it

[oracle@oraadserver pdb2]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Sep 11 17:09:33 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> shut immediate
ORA-01116: error in opening database file 13
ORA-01110: data file 13: '/u01/app/oracle/oradata/ORCL/pdb2/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
SQL> shut abort;
ORACLE instance shut down.
SQL>

SQL> startup mount
ORACLE instance started.

Total System Global Area  872413680 bytes
Fixed Size                  9140720 bytes
Variable Size             310378496 bytes
Database Buffers          545259520 bytes
Redo Buffers                7634944 bytes
Database mounted.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       MOUNTED
         3 PDB1                           MOUNTED
         4 PDB2                           MOUNTED
SQL>

2- Connect to the root container and restore the corresponding files (files can be identified with the REPORT SCHEMA command, as sketched below)
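
For illustration, REPORT SCHEMA run from the root container lists every data file with its number. The output below is an abbreviated, indicative sketch rather than a capture from this test system:

RMAN> REPORT SCHEMA;

Report of database schema for database with db_unique_name ORCL

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
...
13   260      PDB2:SYSTEM          ***     /u01/app/oracle/oradata/ORCL/pdb2/system01.dbf
...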

RMAN> RESTORE DATAFILE 13;

Starting restore at 11-SEP-19
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=26 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00013 to /u01/app/oracle/oradata/ORCL/pdb2/system01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T163708_gql1o4gx_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T163708_gql1o4gx_.bkp tag=TAG20190911T163708
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 11-SEP-19

RMAN>

3- Recover the Data File

RMAN> RECOVER DATAFILE 13;

Starting recover at 11-SEP-19
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 11-SEP-19

4- Open all containers

RMAN> alter database open;

Statement processed

RMAN> ALTER PLUGGABLE DATABASE ALL OPEN READ WRITE;

Statement processed

RMAN>

Conclusion
In this blog we basically explained how to use RMAN in a multitenant environment. We did not talk about point-in-time recovery (PITR); we will cover it in a coming blog.
Note also that we did not use RMAN commands like LIST FAILURE, ADVISE FAILURE and REPAIR FAILURE, but these commands also work; a typical sequence is sketched below.
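
For reference, a Data Recovery Advisor session generally follows this sequence of commands (a sketch only, with the output omitted):

RMAN> LIST FAILURE;
RMAN> ADVISE FAILURE;
RMAN> REPAIR FAILURE PREVIEW;
RMAN> REPAIR FAILURE;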

The article RMAN in a Multitenant Environment appeared first on Blog dbi services.

HVR | Real-Time CDC ::Oracle Autonomous DW::

Rittman Mead Consulting - Fri, 2019-09-13 09:49
Introduction

High quality Business Intelligence is key in decision making for any successful organisation and for an increasing number of businesses this means being able to access real-time data.  At Rittman Mead we are seeing a big upturn in interest in technologies that will help our customers to integrate real-time data into their BI systems.  In a world where streaming is revolutionising the way we consume data, log-based Change Data Capture (CDC) comes into the picture as a way of providing access to real-time transactional data for reporting and analysis. Most databases support CDC nowadays; in Oracle for example the redo logs are a potential source of CDC data. Integration tools that support CDC are coming to the fore and one of these is HVR.

HVR is a real-time data integration solution that replicates data to and from many different data management technologies, both on-premise and cloud-based, including Oracle, SAP and Amazon. The HVR GUI makes it easy to connect together multiple databases, replicate entire schemas, initialise the source and target systems and then keep them in sync.

HVR have recently added Oracle’s Autonomous Data Warehouse (ADW) to their list of supported technologies and so in this blog we are stepping through the process of configuring an HVR Channel to replicate data from an Oracle database to an instance of Oracle ADW.


Setup

Before setting up replication you have to install HVR itself. This is simple enough, a fairly manual CLI job with a couple of files to create and save in the correct directories. Firewalls also need to allow all HVR connections. HVR needs a database schema in which to store the repository configuration data and so we created a schema in the source Oracle database. It also needs some additional access rights on the Oracle source database.
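
As with most log-based CDC tools, the Oracle source also has to expose enough information in its redo stream. HVR's documentation lists the exact prerequisites; as a rough sketch (our assumption here, not a quote from the HVR manual), the usual checks and changes on the source look like this:

SQL> SELECT log_mode, supplemental_log_data_min FROM v$database;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;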


Process

1.

The first step is to register the newly created Hub in the HVR GUI. The GUI can be run on any machine that is able to connect to the server on which the HVR hub is installed. We tested two GUI instances, one running on a Windows machine and one on a Mac. Both were easy to install and configure.

[screenshot]

The database connection details entered here are for the HVR hub database, where metadata about the hub configuration is stored.

2.

Next we need to define our source and target. In both cases the connection between the HVR and the data uses standard Oracle database connectivity. The source connection is to a database on the same server as the HVR hub and the target connection uses a TNS connection pointing at the remote ADW instance.

Defining the source database involves right clicking on Location Configuration and selecting New Location:

[screenshot]

Configuring the target involves the same steps:

[screenshot]

You can see from the screenshot that we are using one of the Oracle-supplied tnsnames entries to connect to ADW and also that we are using a separate Oracle client install to connect to ADW. Some actions within HVR use the Oracle Call Interface and require a more recent version of the Oracle client than provided by our 12c database server install.
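
For reference, the Oracle-supplied entries in the wallet's tnsnames.ora follow the usual ADW pattern sketched below; the host, service name and certificate DN here are placeholders, and the real values come in the wallet zip downloaded from the ADW console:

adw_low =
  (description=
    (address=(protocol=tcps)(port=1522)(host=adb.us-phoenix-1.oraclecloud.com))
    (connect_data=(service_name=abcdefgh_adw_low.adwc.oraclecloud.com))
    (security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")))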

Next up is creating the “channel”. A channel in HVR groups together the source and target locations and allows the relationship between the two to be defined and maintained. Configuring a new channel involves naming it, defining source and target locations and then identifying the tables in the source that contain the data to be replicated.

3.

The channel name is defined by right clicking on Channel Definitions and selecting New Channel.

[screenshot]

We then open the new channel and right click on Location Groups and select New Group to configure the group to contain source locations:

[screenshot]

The source location is the location we defined in step 2 above. We then right click on the newly created group and select New Action, Capture to define the role of the group in the channel:

[screenshot]

The Capture action defines that data will be read from the locations in this group.

A second Location Group is needed for the target. This time we defined the target group to have the Integrate action so that data will be written to the locations in this group.

4.

The final step in defining the channel is to identify the tables we want to replicate. This can be done using the Table Explore menu option when you right-click on Tables:

[screenshot]

5.

With the channel defined we can start synchronising the data between the two systems. We are starting with an empty database schema in our ADW target so we use the HVR Refresh action to first create the target tables in ADW and to populate them with the current contents of the source tables.  As the Refresh action proceeds we can monitor progress:

[screenshot]

6.

Now with the two systems in sync we can start the process of real-time data integration using the HVR Initialise action. This creates two new jobs in the HVR Scheduler, which then need to be started:

[screenshot]

One more thing to do of course: test that the channel is working and replication is happening in real-time. We applied a series of inserts, updates and deletes to the source system and monitored the log files for the two scheduled jobs to see the activity captured from the redo logs on the source:

[screenshot]

and then applied as new transactions on the target:

[screenshot]

The HVR Compare action allows us to confirm that the source and target are still in sync.

[screenshot]
Conclusion

Clearly the scenario we are testing here is a simple one. HVR can do much more, supporting one-to-many, many-to-many and also bi-directional replication configurations. Nonetheless we were impressed with how easy it was to install and configure HVR and also with the simplicity of executing actions and monitoring the channel through the GUI. We dipped into using the command line interface when executing some of the longer running jobs and this was straightforward too.

Categories: BI & Warehousing

Presenting at UKOUG Techfest19 Conference in Brighton, UK

Richard Foote - Thu, 2019-09-12 19:07
I’m very excited to be attending my 3rd UKOUG Conference, this year re-badged as Techfest19. The fact it’s being held in Brighton is a little disconcerting for a Crystal Palace fan, but really looking forward nonetheless to what has always been one of the very best Oracle conferences on the yearly calendar. I have a […]
Categories: DBA Blogs

Oracle Cloud: First login

Dietrich Schroff - Thu, 2019-09-12 13:53
After signing up to Oracle Cloud I tried my first login:

https://cloud.oracle.com/en_US/sign-in


but I only got:
I think the problem is that there is a manual review step on Oracle's side which I have not passed for now:
So let's wait for a day or two...

Estimating how much write I/O is not logged

Bobby Durrett's DBA Blog - Thu, 2019-09-12 11:30

I am trying to figure out how much non-logged write I/O an Oracle database is doing. I want to run an ALTER DATABASE FORCE LOGGING command on the database so that I can use Oracle GoldenGate (GGS), which reads updates from Oracle’s logs. GGS will miss writes that are not logged. But if I turn on force logging it may slow down applications that depend on non-logged writes for good performance. So, I want to find some Oracle performance metrics that give me an idea of how much non-logged write I/O we have, so that I can estimate how much force logging will degrade performance.
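
For reference, checking the current setting and turning force logging on are one-liners. This is a sketch of the standard commands, not something we have run against this production database yet:

SQL> SELECT force_logging FROM v$database;

FORCE_LOGGING
---------------------------------------
NO

SQL> ALTER DATABASE FORCE LOGGING;

Database altered.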

I created SQL*Plus and PythonDBAGraphs reports based on DBA_HIST_IOSTAT_FUNCTION that give some insight into the write I/O that is not logged. Here is the Python-based graphical version of the report for one recent weekend:

Possible NOLOGGING Write I/O

The purple-blue line represents Direct Writes. These may or may not be logged. The red-orange line represents writes through the DBWR process. These are always logged. The light green line represents log I/O through the LGWR process. My theory is that if the purple line is above the green line, the difference must be write I/O that is not logged. But if the green line is equal to or greater than the purple line, you really do not know whether there was any write I/O that was not logged; if there is non-logged write I/O, it cannot be more than the amount indicated by the purple line. So, this graph does not directly answer my question about how much write I/O was not logged, but it does show some numbers that relate to the question.
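
If you want to pull the same numbers yourself, here is a simplified sketch of the kind of query that sits behind the graph. It is not my exact report, and keep in mind that DBA_HIST_IOSTAT_FUNCTION stores cumulative values, so a real report needs to take deltas between consecutive snapshots (for example with the LAG analytic function):

-- write megabytes per snapshot for the three functions of interest
select sn.end_interval_time,
       io.function_name,
       sum(io.small_write_megabytes + io.large_write_megabytes) write_mb
from dba_hist_iostat_function io
join dba_hist_snapshot sn
  on io.snap_id = sn.snap_id
 and io.dbid = sn.dbid
 and io.instance_number = sn.instance_number
where io.function_name in ('Direct Writes', 'DBWR', 'LGWR')
group by sn.end_interval_time, io.function_name
order by sn.end_interval_time, io.function_name;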

I did some experiments with the V$IOSTAT_FUNCTION view that populates DBA_HIST_IOSTAT_FUNCTION to see what values it gives for Direct Writes, DBWR, and LGWR using different scenarios. Here is the zip of these scripts and their output: nologgingscriptsandlogs09122018.zip. I tested four scenarios:

  1. Insert append nologging
  2. Insert append logging
  3. Insert noappend logging
  4. Insert noappend nologging

Scenarios 1 and 2 did Direct Writes; 3 and 4 did DBWR writes; 2, 3, and 4 all did LGWR writes.

Here are the relevant sections of the output that correspond to these statements.

Insert append nologging:

FUNCTION_NAME      WRITE_DIFF_MEGABYTES
------------------ --------------------
Direct Writes                      4660
LGWR                                 46
DBWR                                 27

Insert append logging:

FUNCTION_NAME      WRITE_DIFF_MEGABYTES
------------------ --------------------
LGWR                               4789
Direct Writes                      4661
DBWR                                 37

Insert noappend logging:

FUNCTION_NAME      WRITE_DIFF_MEGABYTES
------------------ --------------------
DBWR                               6192
LGWR                               4528
Direct Writes                         2

Insert noappend nologging:

FUNCTION_NAME      WRITE_DIFF_MEGABYTES
------------------ --------------------
DBWR                               6213
LGWR                               4524
Direct Writes                         2

This pattern is similar to that in an Ask Tom post that I wrote about a while back. That post showed the different situations in which writes were logged or not. I also got some ideas about direct writes and logging from this Oracle support document:

Check For Logging / Nologging On DB Object(s) (Doc ID 269274.1)

It sounds like inserts that go through normal processing eventually get written to disk by DBWR, while inserts with the append hint write directly to the datafiles and, depending on the logging setting, may or may not generate redo for LGWR to write out.

These tests and documents gave me the idea of building a report and graph based on DBA_HIST_IOSTAT_FUNCTION showing the values for the Direct Writes, DBWR, and LGWR FUNCTION_NAME values. The graph above shows an example of a real system. I was surprised to see how high the DBWR and LGWR values were and how low the Direct Writes were. That made me think that it would be safe to try turning on FORCE LOGGING because it likely will have minimal impact on the overall weekend processing. It gave me enough evidence to push for approval to do a controlled test of enabling FORCE LOGGING in production over an upcoming weekend. I will update this post with the results if we move forward with the test.

Bobby

Categories: DBA Blogs

Announcement: Australia/NZ “Let’s Talk Database” Events October 2019 !!

Richard Foote - Wed, 2019-09-11 22:49
I’m very excited to announce the next series of Oracle “Let’s Talk Database” events to be run throughout Australia and New Zealand in October 2019. I’ll be discussing two exciting topics this series, “Oracle Database 19c New Features” and “Oracle Exadata X8“. As always, these sessions run between 9am-1pm, include a networking lunch and are free, […]
Categories: DBA Blogs

Oracle Database 19c Automatic Indexing: Default Index Column Order Part II (Future Legend)

Richard Foote - Tue, 2019-09-10 20:59
In Part I, we explored some options that Oracle might adopt when ordering the columns within an Automatic Index by default, in the absence of other factors where there is only the one SQL statement to be concerned with. A point worth making is that if all columns of an index are specified within SQL […]
Categories: DBA Blogs

Jami (Gnu Ring) review

RDBMS Insight - Tue, 2019-09-10 15:35

An unavoidable fact of database support life is web conferences with clients or users. Most of the time, we’re more interested in what’s going on onscreen than in each other’s faces. But every now and then we need to have a face-to-face. Skype is popular, but I recently had the chance to try out a FOSS alternative with better security: Jami.

Jami (formerly Gnu Ring) is a FOSS alternative to Skype that advertises a great featureset and some terrific privacy features. I suggested to a small group that we try it out for an upcoming conference call.

Just going by its specs, Jami (https://jami.net/) looks amazing. It’s free, open-source software that’s available on all the major platforms, including all the major Linux distros. It boasts the following advantages over Skype and many other Skype alternatives:

  • Distributed: Uniquely, there aren’t any central servers. Jami uses distributed hash table technology to distribute directory functions, authentication, and encryption across all devices connected to it.
  • Secure: All communications are end-to-end encrypted.
  • FOSS: Jami’s licensed under a GPLv3+ license, is a GNU package and a project of the Free Software Foundation.
  • Ad-free: If you’re not paying for commercial software, then you are the product. Not so with Jami, which is non-commercial and ad-free. Jami is developed and maintained by Savoir Faire Linux, a Canadian open-source consulting company.

And its listed features include pretty much everything you’d use Skype for: text messaging, voice calls, video calls, file and photo sharing, even video conference calls.

I wanted to use it for a video conference call, and my group was willing to give it a try. I had high hopes for this FOSS Skype alternative.

Installation

Jami is available for: Windows, Linux, OS X, iOS, Android, and Android TV. (Not all clients support all features; there’s a chart in the wiki.) I tried the OS X and iOS variants.

First, I installed Jami on OS X and set it up. The setup was straightforward, although I had to restart Jami after setting up my account, in order for it to find that account.

Adding contacts

One particularly cool feature of Jami is that your contact profile is stored locally, not centrally. Your profile’s unique identifier is a cumbersomely long 40-digit hexadecimal string, such as “7a639b090e1ab9b9b54df02af076a23807da7299” (not an actual Jami account afaik). According to the documentation, you can also register a username for your account, such as “natalkaroshak”.

Contacts are listed as hex strings. Unfortunately, I wasn’t able to actually find any of my group using their registered usernames, nor were they able to find me under my username. We had to send each other 40-digit hex strings, and search for the exact hex strings in Jami, in order to find each other.

The only way to add a contact, once you’ve located them, is to interact with them, eg. by sending a text or making a call. This was mildly annoying when trying to set up my contact list a day ahead of the conference call.

Once I’d added the contacts, some of them showed up in my contact list with their profile names… and some of them didn’t, leaving me guessing which hex string corresponded to which member of my group.

Sending messages, texts, emojis

Sending and receiving Skype-style messages and emojis worked very well in Jami. Group chat isn’t available.

Making and taking calls

The documented process for a conference call in Jami is pretty simple: call one person, then add the other participants to the ongoing call.

Only the Linux and Windows versions currently support making conference calls. Another member of our group tried to make the conference call. As soon as I answered his incoming call, my Jami client crashed. So I wasn’t able to actually receive a call using Jami for OS X.

The caller and one participant were able to hear each other briefly, before the caller’s Jami crashed as well.

Linking another device to the same account

I then tried installing Jami on my iPhone. Again, the installation went smoothly, and this let me try another very cool feature of Jami.

In Jami, your account information is all stored in a folder on your device. There’s no central storage. Password creation is optional, because you don’t log in to any server when you join Jami. If you do create a password, you can (1) register a username with the account and (2) use the same account on another device.

The process of linking my iPhone’s Jami to the same account I used with my OSX Jami was very smooth. In the OSX install, I generated an alphanumeric PIN, entered the PIN into my device, and entered the account password. I may have mis-entered the first alphanumeric PIN, because it worked on the second try.

Unfortunately, my contacts from the OSX install didn’t appear in the iOS install, even though they were linked to the same account. I had to re-enter the 40-digit hex strings and send a message to each conference call participant.

Making calls on iOS

The iOS client doesn’t support group calling, but I tried video calling one person. We successfully connected. However, that’s where the success ended. I could see the person I called, but was unable to hear her. And she couldn’t see OR hear me. After a few minutes, the video of the other party froze up too.

Conclusion

Jami looked very promising, but didn’t actually work.

All of the non-call stuff worked: installation, account creation, adding contacts (though having to use the 40-digit hex codes is a big drawback), linking my account to another device.

But no one in my group was able to successfully make a video call that lasted longer than a few seconds. The best result was that two people could hear each other for a couple of seconds.

Jami currently has 4.5/5 stars on alternativeto.net. I have to speculate that most of the reviews are from Linux users, and that the technology is more mature on Linux. For OSX and iOS, Jami’s not a usable alternative to Skype yet.

Big thanks to my writing group for gamely trying Jami with me!

Categories: DBA Blogs

Oracle Cloud: Sign up failed... [3] & solved

Dietrich Schroff - Tue, 2019-09-10 13:44
Finally (see my attempts here and here) I was able to sign up to Oracle Cloud.
What did the trick?

I got help from Oracle support:
So I used my Gmail address and this worked:

and then:

Let's see how this cloud will work compared to Azure and AWS.

Red Hat Forum Zürich, 2019, some impressions

Yann Neuhaus - Tue, 2019-09-10 07:34

The Red Hat Forum 2019 in Zürich is currently under way, and people have just finished lunch before the more technical sessions start.

As expected, a lot revolves around OpenShift 4 and automation with Ansible. As dbi is a Red Hat advanced business partner, we took the opportunity to be present with a booth to get in touch with our existing customers and to meet new people:

All the partners got their logo on a huge wall at the entrance to the event:

As the event is getting more and more popular, Red Hat moved to a great, huge venue, the Stage One in Zürich Oerlikon, so all of the 850 participants found their space.

There is even space for some fun stuff:

Important as well: the catering was excellent:

The merger with IBM was an important topic, and Red Hat again stated several times: Red Hat will stay Red Hat. Let’s see what happens here; not all people trust this statement. All in all, it is a great atmosphere here in Oerlikon: great people to discuss with, interesting topics, great organization and a lot of “hybrid cloud”. Technology is moving fast and Red Hat is trying to stay at the front. From a partner perspective the Forum is a great chance to meet the right people within Red Hat, no matter what topic you want to discuss: technology, marketing, training, whatever. I am pretty sure we will attend the next forum as well.

The article Red Hat Forum Zürich, 2019, some impressions appeared first on Blog dbi services.
