Feed aggregator

Why is Oracle Cloud @ Customer a good option?

Syed Jaffar - Thu, 2019-05-30 03:53
For most organizations, security is one of the major concerns about moving to the cloud. Although the cloud concept has been around for quite some time, a majority of customers are still concerned about putting their data in the cloud. To gain that confidence while taking full advantage of cloud technologies, various cloud vendors have started offering cloud-at-customer solutions. In this blog post, I am going to discuss the Oracle Cloud at Customer solution: its advantages, its subscription model, and more.

Oracle Cloud at Customer delivers the full advantages of cloud technologies in your own data center. When you choose the Cloud at Customer option, you subscribe to the hardware and software together. Although Oracle performs the initial setup and configuration and handles day-to-day system management, you retain all the security and network advantages of your own data center.

Typically, the Cloud at Customer option consists of the following:

  • The hardware required to run Cloud at Customer
  • Control panel software
  • The Oracle Advanced Support Gateway
  • Oracle Cloud services
As a customer, you are responsible for managing your cloud account and subscribed services. At any time, you can check your account balance and your current Oracle Cloud at Customer service usage, and you can view that usage by region, by service, or for a specific time period.
To check your account balance and usage, Oracle recommends that you sign in to your Oracle Cloud Account in an Oracle data region. From there, you can view your overall account usage and Universal Credits balance.

In a nutshell, Cloud at Customer brings cloud solutions into your data center, where you can apply all of your data center's rules while taking full advantage of cloud solutions.

Subscribe to business events in Fusion-based SaaS applications from Oracle Integration Cloud ...

For integration with all Oracle Fusion based Cloud services - like Oracle Sales Cloud and Oracle ERP Cloud - each service provides Business Events which external applications or integrations can...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Build and Deploy a Helidon Microservice Using Oracle Developer Cloud

OTN TechBlog - Wed, 2019-05-29 19:53

Project Helidon was recently introduced by Oracle. It provides a new way to write microservices. This blog will help you understand how to use Oracle Developer Cloud to build and deploy your first Helidon-based microservice on Oracle Container Engine for Kubernetes.

Before we begin, let’s examine a few things:

What is Helidon?

Project Helidon is a set of Java libraries for writing microservices. Helidon supports two programming models: Helidon MP, based on MicroProfile 1.2, and Helidon SE, a small, functional-style API.

Regardless of which model you choose, you’ll be writing an application that is a Java SE-based program. Helidon is open source and the code is available on GitHub.  To read and learn more about Project Helidon, see the following links:

Get Started

Helidon doesn’t have downloads. Instead, you’ll need to use the Maven releases. This means that you’ll be using the Maven Archetype to get started with your Helidon microservice project. In this blog, we’ll be using the Helidon SE programming model.

The following basic prerequisites should be installed on your machine to develop with Helidon:

  • Maven
  • Java 8
  • Git CLI (for pushing code to the Oracle Developer Cloud Git repository)

Download the Sample Microservice Project for Helidon with Maven

Open a command prompt (on Windows) or a terminal, go to (or create) the directory or folder where you'd like to create the sample Helidon microservice project, and execute the following Maven command.

mvn archetype:generate -DinteractiveMode=false \
    -DarchetypeGroupId=io.helidon.archetypes \
    -DarchetypeArtifactId=helidon-quickstart-se \
    -DarchetypeVersion=1.1.0 \
    -DgroupId=io.helidon.examples \
    -DartifactId=helidon-quickstart-se



When executed, this Maven command will create the helidon-quickstart-se folder.

The microservice application code, the build files, and the deployment files all reside in the helidon-quickstart-se folder.


These are the files and folder(s):

  • src folder –  Contains the microservice application source code
  • app.yml – Describes the Kubernetes deployment
  • Dockerfile – Provides instructions for building the Docker image
  • Dockerfile.native – Provides instructions for building the Docker image using the Graal VM
  • pom.xml – Project descriptor for the Maven build
  • README.md –  File that contains a description of the project
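As an aside, the generated app.yml is a standard Kubernetes service-plus-deployment descriptor. The exact content depends on the archetype version, but it looks roughly like the following sketch (the image name here is a placeholder, which the steps later in this post replace with your own DockerHub image):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: helidon-quickstart-se
  labels:
    app: helidon-quickstart-se
spec:
  type: NodePort          # exposes the service on a node port for external access
  selector:
    app: helidon-quickstart-se
  ports:
  - port: 8080
    targetPort: 8080
    name: http
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: helidon-quickstart-se
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helidon-quickstart-se
  template:
    metadata:
      labels:
        app: helidon-quickstart-se
    spec:
      containers:
      - name: helidon-quickstart-se
        image: <DockerHub username>/helidonmicro:1.0   # placeholder image name
        ports:
        - containerPort: 8080
```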


Now let’s create an Oracle Developer Cloud project with a Git repository. We’ll call the Git repository Helidon.git.


Navigate to the helidon-quickstart-se folder in your command prompt window and execute the following Git commands to push the Helidon microservice application code to the Git repository you created.

Note: You need to have the Git CLI installed on your development machine to execute Git commands.

git init
git add --all
git commit -m "First commit"
git remote add origin <git repository url>
git push origin master


Your Helidon.git repository should have the structure shown below.


Configure the Build Job

In Developer Cloud, select Builds in the left navigation bar to display the Builds page. Then click the +Create Job button.  

In the New Job dialog, enter BuildHelidon for the Name and select a Template that has the Docker runtime. Then click the Create button. This build job will build a Docker image for the Helidon microservice code in the Git repository and push it to the DockerHub registry.

In the Git tab, select Git from the Add Git dropdown, select Helidon.git as the Git repository and, for the branch, select master.


In the Steps tab, use the Add Step dropdown to add Docker login, Docker build, and Docker push steps.

In the Docker login step, provide your DockerHub Username and Password. Leave the Registry Host empty, since we’re using DockerHub as the registry.

In the Docker build step, enter <DockerHub Username>/helidonmicro for the Image Name and 1.0 for the Version Tag. The full image name shown is <DockerHub Username>/helidonmicro:1.0.


In the Docker push step, enter <DockerHub Username>/helidonmicro for the Image Name and 1.0 for the Version Tag. Then click the Save button.

Before we create the build job that will deploy the Helidon microservice Docker container, you need to edit the app.yml file and modify the Docker image name. To edit that file, go to the Git page, select the Helidon.git repository, and click the app.yml file link.


Click the pencil icon to edit the file.


Replace the image name with <your DockerHub username>/helidonmicro:1.0, then click the Commit button to commit the code changes to the master branch.


To create another build job, navigate to the Builds page and click the +Create Job button. 

In the New Job dialog enter DeployHelidon for the Name, select the template with Kubectl, then click the Create button. This build job will deploy the Docker image built by the BuildHelidon build job to the Kubernetes cluster.


The first thing you’ll do to configure the DeployHelidon build job is to specify the repository where the code is found and select the branch where you’ll be working on the files.  To do this, in the Git tab, add Git from the dropdown, select Helidon.git as the Git repository and, for the branch, select master.

In the Steps tab, select OCIcli and Unix Shell from the Add Step drop down. Take a look at this blog link to see how and where to get the values for the OCIcli configuration. Then, in the Unix Shell build step, enter the following script. You can get the Kubernetes Cluster Id from the Oracle Cloud Infrastructure console. 

mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id <cluster OCID> --file $HOME/.kube/config --region us-ashburn-1
export KUBECONFIG=$HOME/.kube/config
kubectl create -f app.yml
sleep 30
kubectl get services helidon-quickstart-se
kubectl get pods
kubectl describe pods

When you’re done, click the Save button.


Create the Build Pipeline

Navigate to the Pipelines tab in the Builds page. Then click the +Create Pipeline button.


In the Create Pipeline dialog, you can enter the Name as HelidonPipeline. Then click the Create button.

Drag and drop the BuildHelidon and DeployHelidon build jobs and then connect them.


Double click the link that connects the build jobs and select Successful as the Result Condition. Then click the Apply button.


Click the Build button, as shown, to run the build pipeline. The BuildHelidon build job will be executed first and, if it is successful, then the DeployHelidon build job that deploys the container on the Kubernetes cluster on Oracle Cloud will be executed next.


After the jobs in the build pipeline finish executing, navigate to the Jobs tab and click the link for the DeployHelidon build job.  Then click the log icon for the executed build. You should see messages that the service and deployment were successfully created.  Now, for the helidon-quickstart-se service and deployment that were created on the Kubernetes cluster, search the log, and find the public IP address and port to access the microservice, as shown below.



Enter the IP address and port that you retrieved from the log into the browser, using the format shown in the following URL:

http://<retrieved IP address>:<retrieved port>/greet

You should see the “Hello World!” message in your browser.



So, you’ve seen how Oracle Developer Cloud can help you manage the complete DevOps lifecycle for your Helidon-based microservices and how out-of-the-box support for Build and Deploy to Oracle Container Engine for Kubernetes makes it easier.

To learn more about other new features in Oracle Developer Cloud, take a look at the What's New in Oracle Developer Cloud Service document and explore the links it provides to our product documentation. If you have any questions, you can reach us on the Developer Cloud Slack channel or in the online forum.

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Timestamp Oddity

Jonathan Lewis - Wed, 2019-05-29 12:17

[Editorial note: this is something I started writing in 2013, managed to complete in 2017, and still failed to publish. It should have been a follow-on to another posting on the oddities of timestamp manipulation.]

Just as national language support used to be, timestamps and time-related columns are still a bit of a puzzle to the Oracle world – so much so that OEM could cripple a system if it was allowed to do the check for “failed logins over the last 30 minutes”. And, just like NLS, it’s one of those things that you use so rarely that you keep forgetting what went wrong the last time you used it. Here’s one little oddity that I reminded myself about recently:

rem     Script:         timestamp_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          April 2013
create table t1 (
        ts_tz   timestamp(9) with time zone,
        ts_ltz  timestamp(9) with local time zone
);

insert into t1 values(systimestamp, systimestamp);

alter table t1 add constraint ts_ltz_uk unique (ts_ltz);
alter table t1 add constraint ts_tz_uk  unique (ts_tz);

Nothing terribly difficult – just a table with two variants on the timestamp data type and a unique constraint on both: except for one problem. Watch what happens as I create the unique constraints:

SQL> alter table t1 add constraint ts_ltz_uk unique (ts_ltz);

Table altered.

SQL> alter table t1 add constraint ts_tz_uk  unique (ts_tz);
alter table t1 add constraint ts_tz_uk  unique (ts_tz)
ERROR at line 1:
ORA-02329: column of datatype TIME/TIMESTAMP WITH TIME ZONE cannot be unique or a primary key

Slightly unexpected – unless you’ve memorized the manuals, of course, which I hadn’t. I wonder if you can create a unique index on timestamp with time zone:

SQL> create unique index ts_tz_uk on t1(ts_tz);

Index created.

You can’t have a unique constraint, but you CAN create a unique index! How curious – did that really happen?

SQL> select index_name, column_name from user_ind_columns where table_name = 'T1';

INDEX_NAME           COLUMN_NAME
-------------------- --------------------
TS_LTZ_UK            TS_LTZ
TS_TZ_UK             SYS_NC00003$

The index is on a column called SYS_NC00003$ – which looks suspiciously like one of those “function-based-index” things:

SQL> select * from user_ind_expressions where table_name = 'T1';

INDEX_NAME           TABLE_NAME           COLUMN_EXPRESSION                        COLUMN_POSITION
-------------------- -------------------- ---------------------------------------- ---------------
TS_TZ_UK             T1                   SYS_EXTRACT_UTC("TS_TZ")                               1

Oracle has silently invoked the sys_extract_utc() function on our (free-floating) timestamp column to normalize it to UTC. This is really not very friendly but it does make sense, of course – it would be rather expensive to enforce uniqueness if there were (at least) 24 different ways of storing the same absolute value – and 24 is a conservative estimate.
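The need for this normalization is easy to demonstrate outside the database. Here is a small Python sketch (not Oracle code, just an illustration of the same idea): two timezone-aware timestamps with different offsets can denote the same absolute instant, so a uniqueness check has to compare UTC-normalized values – which is exactly what the sys_extract_utc() expression in the index provides.

```python
from datetime import datetime, timezone, timedelta

# Two timezone-aware timestamps that display differently but denote
# the same absolute instant: 10:00 at UTC+2 == 09:00 at UTC+1.
t1 = datetime(2013, 4, 1, 10, 0, tzinfo=timezone(timedelta(hours=2)))
t2 = datetime(2013, 4, 1, 9, 0, tzinfo=timezone(timedelta(hours=1)))

# Normalize both to UTC -- the moral equivalent of SYS_EXTRACT_UTC("TS_TZ").
u1 = t1.astimezone(timezone.utc)
u2 = t2.astimezone(timezone.utc)

print(t1 == t2)          # True: aware datetimes compare by absolute instant
print(u1 == u2)          # True: both normalize to 2013-04-01 08:00:00 UTC
print(u1.isoformat())    # 2013-04-01T08:00:00+00:00
```

If a unique index compared only the displayed (local) values, these two entries would look distinct even though they are the same moment in time; normalizing to UTC first removes that ambiguity.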



Securing the Oracle Cloud

Oracle Security Team - Wed, 2019-05-29 09:01

Greetings from sunny Seattle! My name is Eran Feigenbaum and I am the Chief Information Security Officer for the Oracle Cloud. Oracle Cloud Infrastructure (OCI) is what we call a Gen2 cloud: a fundamentally re-designed public cloud, architected for better customer isolation and enterprise application performance than the cloud designs of ten years past. OCI is the platform for Autonomous Data Warehouse and Autonomous Transaction Processing and, in short order, for all Oracle applications (see Oracle CEO Mark Hurd on moving NetSuite to the Oracle Cloud). This is my inaugural post on our relaunched corporate security blog (thank you, Mary Ann) and I’m thrilled to begin a substantive discussion with you about public cloud security. But first things first: in this post I will describe how my group is organized and how it functions to protect the infrastructure for the literally thousands of applications and services moving to, and continuously being developed on, OCI.

My journey to Oracle was paved on over two decades-worth of experience in security. I was lucky to experience the cloud evolution from all sides in my various roles as pen tester, architect, cloud provider and cloud customer. Certainly, the core set of learnings came from nearly a decade of leading security for what is now Google Cloud. This was during a time when cloud business models were very much in their infancy, as were the protection mechanisms for customer isolation. Later, I would understand the challenges differently as the CISO of an e-commerce venture. Jet.com was a cloud-native business, so while we had no physical data centers, I understood well the limitations of first-generation cloud designs in dealing with cloud-borne threats and data protection requirements. So, when it came to joining OCI, the decision was an easy one. In its Gen2 offering, I saw that Oracle was building the future of enterprise cloud; a place where “enterprise-grade” had meaningful payoff in architecture choices like isolated network virtualization to control threat proliferation and as importantly, DevSecOps was foundational to OCI, not a transformation challenge. What security leader would not want to be a part of that?

OCI distinguishes itself among cloud providers for having predictable performance and a security-first design, so most of our customers are organizations with high sensitivity to data and information protection. They are building high performance computing applications, and that includes our Oracle internal customers, so security must be continuous, ubiquitous, agile and above all scalable. By extension then, the OCI Security Group is in many ways the modern Security Operations Center (SOC). Our job is to enable the continuous integration and continuous deployment (CI/CD) pipeline.

In building the team, I aimed at three main goals: 1) build a complete organization that could address not only detection and response but proactively ensure the security of services developed and deployed on OCI, 2) create a culture and operating practice of frequent communication and metrics sharing among teams to ensure continuous goal evaluation and 3) align with the practices that Oracle’s corporate security teams had set and refined over four decades of protecting customers’ most sensitive data.

To that end, the Chief Security Office at Oracle Cloud Infrastructure (OCI) consists of six teams. Together, these teams provide a comprehensive and proactive set of security services, technologies, guidance, and processes that ensure a good security posture and address security risks.

  • Security Assurance: Works collaboratively with the security teams and stakeholders throughout Oracle to drive the development and deployment of security controls, technologies, processes, and guidance for those building on OCI.
  • Product Security: Examines and evolves the OCI architecture, both hardware and software/services, to ensure we take advantage of innovations and make the changes that enhance our security posture.
  • Offensive Security: Understands and emulates the methods of bad actors. Some of the work involves research, penetration testing and simulating advanced threats against our hardware and software. All of it serves to strengthen our architecture and defensive capability.
  • Defensive Security: The first responders of cloud security. They work proactively to spot weaknesses and, in the event of an incident, work to remediate it within the shortest possible window.
  • Security Automation Services: We know that automation is fundamental to scaling but it is also key to shortening detection and response time. The team aggregates and correlates information about risks and methods to develop visualizations and tools that expedite risk reduction.
  • Security Go-To-Market: One of the most common requests of me is to share information on our security architecture, methods, tooling and best practices. Our internal and external customers want reference architectures and information on how to benefit from our experience. Having this function as part of the group gives the team access to ground truth and aligns with a core value to “put customers first”.

While the team organization is set up for completeness of function in service to the CI/CD pipeline, the key to achieving continuous security and security improvement is how well all members operate as a unit. I think of each team as being essential to the others. Each area generates intelligence that informs the other units and propels them in a kind of virtuous cycle with security automation enabling accelerated revolutions through this cycle.

 Functionally interdependent and mission aligned

As Oracle engineers, for instance, plan for the re-homing or development of new applications and services on OCI, our security architecture works with them. Throughout the drawing board and design phases, we advise on best practices, compliance considerations, tooling and what the process for continuous security will look like during the integration and deployment phases. Security assurance personnel, experts in code review best practices, give guidance and create awareness about the benefits of a security mindset for code development. At time of implementation and execution, the offensive security team conducts tests looking for weaknesses and vulnerabilities which will be surfaced both to the development teams as well as to our defensive security teams for both near term and long-term strategic remediation. This process is continuous as changes and updates can quickly alter the security posture of an environment or an application, so our aim is rapid response and most importantly refining practices and processes that will reduce the risk from those same vulnerabilities for the long term. This latter includes continuous security awareness training so that a security mindset is the cultural norm even as we scale and grow at a rapid pace.

Agility and scale in security are an imperative for a cloud operator, especially one at Oracle’s size and scope which attracts the most security sensitive businesses, governments and organizations. Our approach to security automation applies to nearly every activity and process of OCI security. We observe that which can be replicated and actioned either without human intervention or through self service mechanisms. Automation provides innovations and tooling that help not only our OCI security group but internal security stakeholders and even customers. Through visibility and self-service mechanisms, we make developers and service owners part of the OCI security mission and consequently improve our ability to maintain consistent security.

I mentioned at the beginning of this post that key to security effectiveness is not only an organizational structure built for the modern cloud but also security functional areas that are interdependent and in constant communication. One of the best ways that I have found to do this in my career managing large teams is the Objectives and Key Results (OKR) process. Similar to Key Performance Indicators (KPIs), OKRs enable measurement of success or failure, but unlike KPIs, OKRs encourage leaders, teams and contributors to make big bets and stretch beyond what seems merely achievable toward what can be revolutionary. In his seminal book Measure What Matters (which I talk about to anyone who will listen), John Doerr outlines the structure by which agile enterprises stay aligned to mission even as they adjust to account for changes in business conditions. The key results will confirm whether the direction is correct or needs adjusting. The teams of the OCI Security group stay aligned and informed by one another through the OKR system. The focus on cross-communication, deduplication and realignment gives us visibility into the incremental improvements and successes.

With this description of the OCI Security Group, I’ve given you some insights to how we secure the industry’s most technically advanced public cloud. Over the next months, I am eager to delve deeper on the architecture choices and innovations that set us apart. Let the journey of getting to know OCI security begin!





Utilities Test Drive Analytics from Oracle to Manage Influx of Electric Vehicles

Oracle Press Releases - Wed, 2019-05-29 07:00
Press Release
Advanced analytics help utilities better plan for energy demand as cars move from gas to grid

Redwood Shores, Calif.—May 29, 2019

The use of electric vehicles (EVs) is growing at a record rate, with the International Energy Agency (IEA) predicting that the number of electric cars on the road will rise from 3.1 million in 2017 to 125 million in 2030. Enabling utilities to intelligently manage this new energy demand on the power grid, Oracle Utilities has unveiled a breakthrough in EV detection.

Tapping deep machine learning, Oracle Utilities Analytics Insights is able to identify the presence of an EV, show the time and frequency of charging and disaggregate the energy being consumed by the vehicle with advanced metering infrastructure (AMI) data. With this intelligence, utilities can reliably plan for the energy infusion needed to power EVs at scale and engage customers to charge at the times that are the least expensive for them and best for the health of the energy grid. The new EV detection capabilities from Oracle Utilities Analytics Insights are currently being piloted by a number of utilities.

“With solar, wind and storage technologies now constituting 90 percent of investment interest, the road is paved for deeper decarbonization of the electricity sector,” said Ben Kellison, director of grid research, Wood Mackenzie Power & Renewables. “The case for transport electrification has never been stronger and the rapid growth in investment interest from car manufacturers is a confirmation of the future consumer demand for EVs. Utilities are now faced with an increasingly clean and decentralized system and they need new data and analytic packages to support a new planning paradigm.”

Impact of the EV Explosion on the Energy Grid

The influx of EVs could represent an average additional growth of 1-4 percent in peak load on the grid over the next few decades, according to a report by McKinsey. While this may seem modest, the impact will be highly volatile and cause unpredictable spikes at the local sub-station and feeder levels in residential areas. This load is projected to reach as high as 30 percent peak growth in certain urban areas that are hotspots for EV adoption.

While this transportation development represents an important step forward in reducing carbon emissions, most electricity grids were created long before EVs were a commercially viable consumer product. As transportation continues to evolve from gas to the grid, utilities must plan for an uptick in energy demand that will vary dramatically by area. 

“With almost every major auto manufacturer releasing new EV models in the coming years, the window of time for utilities to act is closing,” said Dan Byrnes, SVP of product development, Oracle Utilities. “The intelligence our analytics provide is essential for utilities to make needed assessments on grid investments and in tandem, work as trusted advisors to customers who may be in the dark as to how owning an EV is impacting their energy footprint and bill. From utility optimization to proven customer engagement, only Oracle offers a complete package to manage the explosion of EVs.”

Powering Better EV Planning and Engagement

The Oracle EV detection capabilities are powered by more than a decade of research and experience disaggregating household energy data from billions of data points collected from 60 million households across 100 utilities. Oracle’s trained data models can be deployed for each specific household’s usage to understand whether a customer has an EV, how they interact with their EV chargers, and where EVs are clustering on the distribution grid. As such, utilities will be able to better plan for and manage the operational impact of EVs as a new distributed energy resource (DER) on the grid.

From a customer perspective, charging an EV can increase a typical household’s energy usage by 15 percent or more and potentially double usage during peak demand times. With the offering, utilities will have the tools to roll-out intuitive, user-friendly EV adoption customer journeys and time-of-use (TOU) plans to engage, educate and reward owners for charging during non-peak times. In the future, these same kinds of engagement programs can also be used for utilities to buy-back unused energy from their customers’ EV batteries to help balance energy supply and demand in times of need.

“EVs will have an impact on every part of a utility’s operations—from grid stability and regulatory affairs to customer billing and engagement,” added Byrnes. “With Oracle, our customers have the tools and intelligence they need to make better decisions, maximize outcomes, and increase customer satisfaction every step of the journey.”

Contact Info
Kristin Reeves
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kristin Reeves

  • +1.925.787.6744

PeopleSoft Administrator Podcast: #184 – nVision Performance

David Kurtz - Wed, 2019-05-29 06:42
I recorded a second podcast with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast, this time about nVision.
(10 May 2019) #184 – nVision Performance You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.

[Q/A] 1Z0-932 Oracle Cloud Infrastructure Architect Certification Day 1 Training Review

Online Apps DBA - Wed, 2019-05-29 05:22

[Q/A] 1Z0-932 Oracle Cloud Infra Architect: Day 1 Review & Feedback. Oracle Cloud Infrastructure (OCI) is used everywhere, be it for Database, EBS (R12), PeopleSoft, JD Edwards, SOA, OBIEE, WebLogic or Unix/Linux machines. There is huge demand in the market for certified cloud architects. Check what 56 students in our May batch learned on Day 1, including […]

The post [Q/A] 1Z0-932 Oracle Cloud Infrastructure Architect Certification Day 1 Training Review appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Your cloud is just waiting for you

Yann Neuhaus - Wed, 2019-05-29 05:01

After several training courses offered by dbi services with different cloud service providers, plus several real-world experiences, I am enthusiastic about the evolution of digital technologies. Azure, Amazon AWS and Oracle OCI offer excellent infrastructure services whose names change but whose concepts are the same. There are also a few differences between the cloud infrastructure services that can be interesting.

Perhaps you too are interested in the cloud, but it all still seems unclear. To that I say: think of the cloud as a traditional data center, except that instead of provisioning your machines and connections by hand in your over-air-conditioned server room, everything is now done in a few clicks (or command lines).

Allow me to present some information about Oracle's OCI cloud that should dispel the rumors and help you move forward.


You can have physical (bare-metal) machines and/or virtual machines. These physical and virtual machines come in various “shapes” (templates):

  • Standard shapes: The standard choice for most applications
  • DenseIO shapes: Designed for data-intensive workloads
  • GPU shapes: Designed for workloads that use graphics processors (Bitcoin mining, algorithms to win at the game of Go, …)
  • High performance computing (HPC) shapes: Bare metal only; designed for massively parallel CPU-intensive workloads

Provisioning time: a few minutes


The connectivity options are virtually limitless. In particular, you can:

  • Connect your cloud infrastructure to the Internet and/or to your data center
  • Connect the Oracle cloud to another cloud (Amazon AWS, for example)
  • Connect your cloud space to your on-premises infrastructure over a “FastConnect” link, which provides a bandwidth of at least 10 Gb/s

These connectivity options cover every scenario and keep you from being locked in with Oracle. You can even turn your cloud space into an extension of your data center thanks to FastConnect. A diagram makes this easier to understand:

Le service base de données autonome (Autonomous Database)

Le service base de données autonome sert 2 types de charges de travail:

  • La charge de travail applicative standard (courte requêtes très fréquentes)
  • La charge de travail pour les entrepôts de données (long traitements unitaires)

With an autonomous database, all you configure is the number of CPUs and the storage capacity, with no service interruption or performance degradation. In addition, all of the following operations are handled automatically:

  • Database creation
  • Database backup
  • Database feature updates
  • Security and bug patching
  • Database tuning

You can keep your existing licenses for the Cloud or use the licenses included in the service.
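Scaling the CPU count or storage without downtime, as described above, can also be driven from the OCI CLI. This is a sketch only; the database OCID is a placeholder and the parameter values are examples:

```shell
oci db autonomous-database update \
    --autonomous-database-id <autonomous-database-ocid> \
    --cpu-core-count 4 \
    --data-storage-size-in-tbs 2
```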

Available countries (for now)

List of countries available for your Cloud: https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm

  • United Kingdom
  • Germany
  • Canada
  • United States
  • Japan
  • South Korea
  • Switzerland (September 2019)

You can pay as you go or through a subscription. You can compare prices yourself using the online calculators provided by the vendors:

  • Microsoft Azure: https://azure.microsoft.com/en-in/pricing/calculator
  • Oracle Cloud: https://cloud.oracle.com/en_US/cost-estimator
  • Amazon AWS: https://calculator.s3.amazonaws.com/index.html

For example, for a virtual server with 8 CPUs and about 120 GB of memory, you would pay per month:

  • Microsoft Azure (DS13v2): 648.97 $
  • Oracle Cloud (B88514): 401.00 $
  • Amazon AWS (f1.x2large): 1207.00 $

While other cloud providers were well ahead of Oracle for several years, Oracle now offers a solid infrastructure service (Oracle OCI) that is as easy to understand as the others. One advantage over some competitors is that bare metal machines are available. And finally, you are not tied to Oracle, thanks to the option of private or public networks between the Oracle cloud and other cloud providers.

Finally, even though the cloud frees you from hardware management and cuts your provisioning time down to a few minutes, you will still need system engineers to manage access, resources, day-to-day operations, backups, and your disaster recovery plan.

Did I forget anything? Feel free to leave your question below and I will be happy to answer it.

The article Ton nuage n’attends que toi first appeared on Blog dbi services.

Dell Boomi Training: Day 1 Review/Introduction & Q/As

Online Apps DBA - Wed, 2019-05-29 02:26

[Dell Boomi Training] Day 1 Review/Introduction & Q/As. We recently held our Day 1 session of the Dell Boomi training, so learn with our new blog at https://k21academy.com/dellboomi19 about the review of the interactive live session and the introduction to on-premise integration, which covers: ✔ Insights into the Dell Boomi platform and the course agenda ✔ Introduction to AtomSphere, Atom in […]

The post Dell Boomi Training: Day 1 Review/Introduction & Q/As appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Cloud : Who are the gatekeepers now?

Tim Hall - Wed, 2019-05-29 01:17

There’s something you might consider sinister lurking in the cloud, and it might cause a big disruption in who are considered the gatekeepers of your company’s services. I’ve mentioned governance in passing before, but maybe it’s time for me to do some thinking out loud to get this straight in my own head.

In the on-prem world the IT departments tend to be the gatekeepers, because they are responsible for provisioning, developing and maintaining the systems. If you want some new infrastructure or a new application, you have to go and ask IT, so it’s pretty easy for them to keep a handle on what is going on and stay in control.

Infrastructure as a Service (IaaS)

The initial move to the cloud didn’t really change this. Most people who proudly proclaimed they had moved to the cloud were using Infrastructure as a Service (IaaS), and were really just using the cloud provider as a basic hosting company. I’ve never really considered this cloud. Yes, you get some flexibility in resource allocation, but it’s pretty much what we’ve always done with hosting companies. It’s just “other people’s servers”. As far as IaaS goes, the gatekeepers are still the same, because you need all/most of the same skills to plan, setup and maintain such systems.

Platform as a Service (PaaS)

When we start talking about Platform as a Service (PaaS), things start to get a little bit trickier. The early days of PaaS weren’t a great deal different to IaaS, as some of the PaaS services weren’t what I would call platforms. They were glorified IaaS, with pre-installed software you had to manage yourself. With the emergence of proper platforms, which automate much of the day-to-day drudgery, things started to shift. A developer could request a database without having to speak to the DBAs, sysadmins, virtualisation and network folks. You can of course question the logic of that, but it’s an option and there is the beginning of a power shift.

When we start talking about IoT and Serverless platforms things change big-time. The chances are the gatekeeper will be the budget holder, since you will be charged on a per request basis, and probably have to set a maximum spend per unit time to keep things under control. Depending on how your company manages departmental budgets, the gatekeeper could be whoever has some spare cash this quarter…

Software as a Service (SaaS)

Software as a Service (SaaS) possibly presents the biggest challenge for traditional on-prem IT departments, as the business can literally go out and pick the product they want, without so much of a thought for what IT think about it. Once they’ve spent the money, they will probably come to IT and expect them to magic up all the data integrations to make things work as expected. Also, once that money has been spent, good luck trying to persuade people they backed the wrong horse. SaaS puts the business users directly in the driving seat.


It would be naive to think any movement to the cloud (IaaS, PaaS or SaaS) could be done independently of an existing IT department, but the tide is turning.

The IT world has changed. The traditional power bases are eroding, and you’ve got to adapt to survive. Every time you say “No”, without offering an alternative solution, you’re helping to make yourself redundant. Every time you say, “We will need to investigate it”, as a delaying tactic, you’re helping to make yourself redundant. Every time you ignore new development and delivery pipelines and platforms, you are sending yourself to an early retirement. I’m not saying jump on every bandwagon, but you need to be aware of them, and why they may or may not be useful to you and your company.

Recently I heard someone utter the phrase, “you’re not the only hotel in town”. I love that, and it should be a wake-up call for any traditional IT departments and cloud deniers.

It’s natural selection baby! Adapt or die!



Cloud : Who are the gatekeepers now? was first posted on May 29, 2019 at 7:17 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Latest Blog Posts from Members of the Oracle ACE Program - May 12-18, 2019

OTN TechBlog - Tue, 2019-05-28 16:24
Getting into it...

Spring fever affects different people in different ways. For instance, while most of us were taking in nature or discussing the final episode of Game of Thrones, these members of the Oracle ACE Program chose to spend time pounding out more than 50 blog posts. Here's what they produced for the week of May 12-18, 2019.


ACE Director

Clarisa Maman Orfali
Founder/System Engineer, ClarTech Solutions, Inc.
Irvine, CA


Timo Hahn
Principal Software Architect, virtual 7 GmbH


ACE

Dirk Nachbar
Senior Consultant, Trivadis AG
Bern, Switzerland


Laurent Leturgez
President/CTO, Premiseo
Lille, France


Patrick Jolliffe
Manager, Li & Fung Limited
Hong Kong


Phil Wilkins
Senior Consultant, Capgemini
Reading, United Kingdom


Philippe Fierens
Oracle DBA on Exadata with OVM
Brussels, Belgium


Rodrigo Mufalani
Principal Database Architect, eProseed


Satishbabu Gunukula
Sr. Database Architect, Intuitive Surgical
San Francisco, California


Sven Weller
CEO, Syntegris Information Solutions Gmbh

ACE Associate

Adam Boliński
Oracle Certified Master
Warsaw, Poland


Adrian Ward
Owner, Addidici
Surrey, United Kingdom


Bruno Reis da Silva
Senior Oracle Database Administrator, IBM
Stockholm, Sweden


Elise Valin-Raki
Solution Manager, Fennia IT


Fred Denis
Oracle/Exadata DBA, Pythian
Brisbane, Australia


Sayan Malakshinov
Oracle performance Tuning Expert, TMD (TransMedia Dynamics) Ltd.
Aylesbury, United Kingdom


Simo Vilmunen
Technical Architect, Uponor
Toronto, Canada

Additional Resources

Network design for Oracle Cloud Infrastructure

Syed Jaffar - Tue, 2019-05-28 15:48
Assuming you are planning to migrate your resources from an Oracle Cloud Infrastructure Compute Classic environment to Oracle Cloud Infrastructure, this blog post explains the details of the network design for the Cloud Infrastructure environment. It's important to understand and map the network design and details of both environments.

The Cloud Infrastructure Compute Classic network has an IP Networks and Shared Network model. On the other hand, Cloud Infrastructure has a network model of Virtual Cloud Networks (VCNs), subnets, and availability domains.

Before migration, you must map the network resources between the environments (Source -> Target):
Shared Network -> VCN, IP Network -> IP Network, VPN -> IPSec VPN, and FastConnect Classic -> FastConnect.

Consider creating the network elements listed below in Oracle Cloud Infrastructure:

  • VCN and Subnet CIDR Prefixes
  • DNS Names 
Use the following procedure to configure the cloud network for the Cloud Infrastructure environment:

  1. Create one or more VCNs.
  2. Create an Internet gateway and/or NAT gateway. An Internet gateway is a virtual router that allows resources in a public subnet direct access to the public Internet. A NAT gateway allows resources that don't have public IP addresses to access the Internet, without exposing those resources to incoming traffic from the Internet.
  3. Configure a service gateway, if required. A service gateway provides a path for private network traffic between your VCN and a public Oracle service such as Oracle Cloud Infrastructure Object Storage.
  4. Create one or more subnets in each VCN.
  5. Configure local peering gateways between VCNs, if required.
  6. Configure security lists, security rules, and route tables for each subnet.
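The numbered steps above can be sketched with the OCI CLI. This is a hedged outline only: the compartment and VCN OCIDs are placeholders, and the CIDR blocks are examples:

```shell
# Step 1: create a VCN
oci network vcn create --compartment-id <compartment-ocid> \
    --cidr-block 10.0.0.0/16 --display-name migration-vcn

# Step 2: create an Internet gateway (and/or a NAT gateway) in the VCN
oci network internet-gateway create --compartment-id <compartment-ocid> \
    --vcn-id <vcn-ocid> --is-enabled true

# Step 4: create a subnet in the VCN
oci network subnet create --compartment-id <compartment-ocid> \
    --vcn-id <vcn-ocid> --cidr-block 10.0.1.0/24
```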


XTended Oracle SQL - Tue, 2019-05-28 13:48

Today I wanted to give a link to the description of v$sql_hint.target_level to show that no_parallel can be specified for a statement or an object, and though it's pretty obvious, surprisingly I haven't found any articles or posts about it, so this short post describes it.
v$sql_hint.target_level is a bitset, where
1st bit set to 1 means that the hint can be specified on statement level,
2nd – on query block level,
3rd – on object level,
4th – on join level(for multiple objects).
Short example:

   select name,sql_feature
         ,decode(bitand(target_level,1),0,'no','yes') Statement_level
         ,decode(bitand(target_level,2),0,'no','yes') Query_block_level
         ,decode(bitand(target_level,4),0,'no','yes') Object_level
         ,decode(bitand(target_level,8),0,'no','yes') Join_level
   from v$sql_hint h;
with hints as (
   select name,sql_feature,version
         ,decode(bitand(target_level,1),0,'no','yes') Statement_level
         ,decode(bitand(target_level,2),0,'no','yes') Query_block_level
         ,decode(bitand(target_level,4),0,'no','yes') Object_level
         ,decode(bitand(target_level,8),0,'no','yes') Join_level
   from v$sql_hint h
)
select *
from hints
where statement_level='yes'
  and to_number(regexp_substr(version,'^\d+')) >= 18
order by version;


----------------- --------------- -------------------- -------- ------------ --------------- ----------------- ------------ ----------
PDB_LOCAL_ONLY    QKSFM_DML       PDB_LOCAL_ONLY       18.1.0              1 yes             no                no           no
SUPPRESS_LOAD     QKSFM_DDL       SUPPRESS_LOAD        18.1.0              1 yes             no                no           no
SYSTEM_STATS      QKSFM_ALL       SYSTEM_STATS         18.1.0              1 yes             no                no           no
MEMOPTIMIZE_WRITE QKSFM_EXECUTION MEMOPTIMIZE_WRITE    18.1.0              1 yes             no                no           no
SKIP_PROXY        QKSFM_ALL       SKIP_PROXY           18.1.0              1 yes             no                no           no
CURRENT_INSTANCE  QKSFM_ALL       CURRENT_INSTANCE     18.1.0              1 yes             no                no           no
JSON_LENGTH       QKSFM_EXECUTION JSON_LENGTH          19.1.0              1 yes             no                no           no
QUARANTINE        QKSFM_EXECUTION QUARANTINE           19.1.0              1 yes             no                no           no
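The same bit test can be reproduced outside the database. Here is a small Python sketch of the decoding logic used in the query above (the function name is my own, not from the post):

```python
def hint_levels(target_level):
    """Decode v$sql_hint.target_level into the levels a hint supports."""
    names = ["statement", "query block", "object", "join"]
    # bit 0 = statement, bit 1 = query block, bit 2 = object, bit 3 = join
    return [name for bit, name in enumerate(names) if target_level & (1 << bit)]

# PDB_LOCAL_ONLY has target_level = 1: statement level only
print(hint_levels(1))   # ['statement']
# A hint with target_level = 12 would be object + join level
print(hint_levels(12))  # ['object', 'join']
```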
Categories: Development

RMAN Incremental & Demo Part 2 (Level 1)

Zed DBA's Oracle Blog - Tue, 2019-05-28 11:07

This blog post is part of the “RMAN Back to Basics” series, which can be found here.

Incremental Backup

An incremental backup only backs up those data blocks that have changed since the last backup.

Types of Incremental Backups

There are 2 types of Incremental Backups:

  1. Level 0 backups are the base for subsequent backups.  A level 0 copies all blocks containing data, just like a full backup, the only difference being that a full backup is never included in an incremental strategy.  Level 0 backups can be backup sets or image copies.
  2. Level 1 backups are subsequent backups of a level 0, backing up by default all blocks changed after the most recent level 0 or 1, known as a differential incremental backup.  More on the different types of level 1 backups is discussed in detail here.
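For reference, the two backup types described above correspond to RMAN commands along these lines (a sketch only; the tags are arbitrary and happen to match the ones used in the demo):

```sql
-- Level 0: the base backup, analogous to a full backup
BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'INCR LEVEL 0';

-- Level 1: only the blocks changed since the most recent level 0 or 1
BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INCR LEVEL 1';
```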
Incremental Level 1 Demo

We take an incremental level 1 backup using my script 6_incremental_level_1.sh:

[oracle@dc1sbxdb001 demo]$ ./6_incremental_level_1.sh 
Step 1: Set environment

Setting the Database Environment using oraenv...
The Oracle base remains unchanged with value /u01/app/oracle

ORACLE_HOME: /u01/app/oracle/product/12.2.0/dbhome_1

Press Enter to continue

The environment is set to my ZEDDBA database, then next the incremental backup (Level 1) is taken:

Step 2: Take Incremental Level 1 Backup

Content of the 6_incremental_level_1.cmd file:

HOST 'read Press Enter to LIST BACKUP';

Press Enter to continue

Calling 'rman target / cmdfile=/media/sf_Software/scripts/demo/6_incremental_level_1.cmd'

Recovery Manager: Release - Production on Wed May 22 12:08:27 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.

connected to target database: ZEDDBA (DBID=3520942925)

2> HOST 'read Press Enter to LIST BACKUP';
Starting backup at 22-MAY-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=153 device type=DISK
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
channel ORA_DISK_1: starting piece 1 at 22-MAY-19
channel ORA_DISK_1: finished piece 1 at 22-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_22/o1_mf_nnnd1_INCR_LEVEL_1_ggbcfhbp_.bkp tag=INCR LEVEL 1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 22-MAY-19

Starting Control File and SPFILE Autobackup at 22-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_22/o1_mf_s_1008936514_ggbcflon_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 22-MAY-19

host command complete

We use host within RMAN just to wait for input before moving on for demo purposes.  Next we list the backup within RMAN using ‘LIST BACKUP‘:

List of Backup Sets

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
1 Full 498.93M DISK 00:00:08 16-MAY-19 
BP Key: 1 Status: AVAILABLE Compressed: NO Tag: FULL BACKUP
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_nnndf_FULL_BACKUP_gfv4k119_.bkp
List of Datafiles in backup set 1
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2 Full 8.31M DISK 00:00:00 16-MAY-19 
BP Key: 2 Status: AVAILABLE Compressed: NO Tag: TAG20190516T173912
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_16/o1_mf_s_1008437952_gfv4kjko_.bkp
SPFILE Included: Modification time: 16-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 353836 Ckp time: 16-MAY-19

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
3 77.83M DISK 00:00:01 16-MAY-19 
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_annnn_FULL_BACKUP_PLUS_ARC_gfv4y7m2_.bkp

List of Archived Logs in backup set 3
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 13 332298 16-MAY-19 354044 16-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
4 Full 498.94M DISK 00:00:06 16-MAY-19 
BP Key: 4 Status: AVAILABLE Compressed: NO Tag: TAG20190516T174603
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_nnndf_TAG20190516T174603_gfv4yco3_.bkp
List of Datafiles in backup set 4
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 Full 354058 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 Full 354058 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 Full 354058 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 Full 354058 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
5 3.50K DISK 00:00:00 16-MAY-19 
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_annnn_FULL_BACKUP_PLUS_ARC_gfv4ym4v_.bkp

List of Archived Logs in backup set 5
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 14 354044 16-MAY-19 354066 16-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
6 Full 8.31M DISK 00:00:00 16-MAY-19 
BP Key: 6 Status: AVAILABLE Compressed: NO Tag: TAG20190516T174612
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_16/o1_mf_s_1008438372_gfv4ynmv_.bkp
SPFILE Included: Modification time: 16-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 354077 Ckp time: 16-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
7 Full 8.31M DISK 00:00:00 17-MAY-19 
BP Key: 7 Status: AVAILABLE Compressed: NO Tag: TAG20190517T165453
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_17/o1_mf_s_1008521693_gfxpbfs7_.bkp
SPFILE Included: Modification time: 17-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 458035 Ckp time: 17-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
8 Incr 0 511.64M DISK 00:00:05 20-MAY-19 
BP Key: 8 Status: AVAILABLE Compressed: NO Tag: INCR LEVEL 0
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_20/o1_mf_nnnd0_INCR_LEVEL_0_gg5njx01_.bkp
List of Datafiles in backup set 8
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 0 Incr 571045 20-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 0 Incr 571045 20-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 0 Incr 571045 20-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 0 Incr 571045 20-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
9 Full 8.31M DISK 00:00:00 20-MAY-19 
BP Key: 9 Status: AVAILABLE Compressed: NO Tag: TAG20190520T171324
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_20/o1_mf_s_1008782004_gg5nk4cs_.bkp
SPFILE Included: Modification time: 20-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 571056 Ckp time: 20-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
10 Incr 1 11.22M DISK 00:00:02 22-MAY-19 
BP Key: 10 Status: AVAILABLE Compressed: NO Tag: INCR LEVEL 1
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_22/o1_mf_nnnd1_INCR_LEVEL_1_ggbcfhbp_.bkp
List of Datafiles in backup set 10
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 1 Incr 575174 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 1 Incr 575174 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 1 Incr 575174 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 1 Incr 575174 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
11 Full 8.31M DISK 00:00:00 22-MAY-19 
BP Key: 11 Status: AVAILABLE Compressed: NO Tag: TAG20190522T120834
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_22/o1_mf_s_1008936514_ggbcflon_.bkp
SPFILE Included: Modification time: 22-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 575181 Ckp time: 22-MAY-19

Recovery Manager complete.

Press Enter to continue

Next we look at the file size of the backup piece and we can see the level 1 was just 12M compared to the level 0 which was 512M:

Files size on disk:
total 1.1G
-rw-r-----. 1 oracle oinstall 499M May 16 17:39 o1_mf_nnndf_FULL_BACKUP_gfv4k119_.bkp
-rw-r-----. 1 oracle oinstall 78M May 16 17:46 o1_mf_annnn_FULL_BACKUP_PLUS_ARC_gfv4y7m2_.bkp
-rw-r-----. 1 oracle oinstall 499M May 16 17:46 o1_mf_nnndf_TAG20190516T174603_gfv4yco3_.bkp
-rw-r-----. 1 oracle oinstall 4.0K May 16 17:46 o1_mf_annnn_FULL_BACKUP_PLUS_ARC_gfv4ym4v_.bkp

total 512M
-rw-r-----. 1 oracle oinstall 512M May 20 17:13 o1_mf_nnnd0_INCR_LEVEL_0_gg5njx01_.bkp

total 12M
-rw-r-----. 1 oracle oinstall 12M May 22 12:08 o1_mf_nnnd1_INCR_LEVEL_1_ggbcfhbp_.bkp

Press Enter to continue

Finally, we update the demo log table:

Step 3: Updating and viewing demo log
Calling 'sqlplus / as sysdba'
To updated and view demo log

1 row created.

Commit complete.

------------------------------ --------------------------------------------------
16-MAY-19 PM Enable Archive Log Mode
16-MAY-19 PM Full Backup
16-MAY-19 PM Full Backup plus Archive Logs
17-MAY-19 PM Image Copy
20-MAY-19 PM Incremental Level 0
22-MAY-19 PM Incremental Level 1

6 rows selected.

Press Enter to exit shell script

[oracle@dc1sbxdb001 demo]$
Reference Scripts
  1. 6_incremental_level_1.sh
  2. 6_incremental_level_1.cmd

To download both in one zip: 6_incremental_level_1.zip

The rest of the series
  1. Oracle Database File Placement Best Practice & Create Database Demo
  2. RMAN Full Backup & Demo
  3. RMAN Image Copy & Demo
  4. RMAN Incremental & Demo Part 1 (Level 0)
  5. RMAN Incremental & Demo Part 2 (Level 1)
  6. RMAN Incremental with Block Change Tracking & Demo
  7. RMAN Incremental Differential vs Cumulative & Demo
  8. RMAN Incremental Updating Backup & Demo Part 1
  9. RMAN Incremental Updating Backup & Demo Part 2
  10. Flashback
  11. RMAN Block Media Recovery
  12. RMAN Recover database with only FRA
  13. RMAN Obsolete

Please Note: Links to the blog posts will be released daily and updated here.


If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.


Zed DBA (Zahid Anwar)

Categories: DBA Blogs

Migrating Oracle Cloud Infrastructure Classic Workloads to Oracle Cloud Infrastructure - Migration Tools

Syed Jaffar - Tue, 2019-05-28 03:22
If you are planning to migrate your resources from Oracle Cloud Infrastructure Classic to Oracle Cloud Infrastructure, Oracle provides a variety of tools to achieve this. This blog post will discuss some of the tools which can be used to migrate Oracle Cloud Infrastructure Classic workload resources to Oracle Cloud Infrastructure.

The tools below can be used to identify resources in Oracle Cloud Infrastructure Classic environments and to migrate them to an Oracle Cloud Infrastructure tenancy. Using these tools, one can set up the required network and migrate VMs and block storage volumes to the target systems.

Tools for migrating infrastructure resources : Compute, VMs and Block Storage

  • Oracle Cloud Infrastructure Classic Discovery and Translation Tool: as the name suggests, it is a discovery tool which assists in discovering the different resources in your Cloud Infrastructure Classic environments, such as your Compute Classic, Object Storage Classic, and Load Balancing Classic accounts. Additionally, it is capable of reporting the items in the specified environment, the list of VMs in the source environment, and the networking information of the source system.
  • Oracle Cloud Infrastructure Classic VM and Block Storage Tool: This tool automates the process of migrating VMs and Block Storage over to the target environment. 
  • Oracle Cloud Infrastructure Classic Block Volume Backup and Restore Tool: This tool is used to migrate your remote snapshots of storage volumes as well as scheduled backups. 

Tools for migrating databases

To migrate databases to Oracle Cloud Infrastructure, you can use the Oracle Cloud Infrastructure Classic Database Migration Tool. This tool uses Oracle RMAN to create a backup and restore the database on the target system.

Alternatively, an Oracle Data Guard solution can also be used to migrate single-instance or RAC databases to Oracle Cloud Infrastructure.

Tools for migrating Object Storage

  • The rclone command is used to migrate your object storage data if you don't use the Oracle Cloud Infrastructure Storage Software Appliance.
  • If the Oracle Cloud Infrastructure Storage Software Appliance is used to store your object data, then you can migrate your data to your Oracle Cloud Infrastructure Object Storage account by using the Oracle Cloud Infrastructure Storage Gateway.
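As an illustration of the rclone approach (the remote names here are assumptions; you would define your own source and target remotes with `rclone config` first), copying a bucket's contents looks like:

```shell
# 'classic' and 'oci' are pre-configured rclone remotes pointing at the
# source (Classic) and target (OCI) object storage endpoints
rclone copy classic:source-bucket oci:target-bucket --progress
```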

SUSE Expert Day Zürich

Yann Neuhaus - Tue, 2019-05-28 01:44

On May 16th I attended the SUSE Expert Day in Zürich.
An interesting Agenda was waiting for me, all under the topic: “My kind of open”

After a small welcome coffee, SUSE started with the keynote by Markus Wolf (Country Manager and Sales Director, ALPS Region). After a short introduction of the Swiss SUSE team, he talked about IT transformation and his vision of the next few years in IT – nice to hear that IT is getting even more complex than it is now.
One slide really impressed me:

Amazing, isn’t it?

As a customer story, Nicolas Christener, CEO and CTO of the Adfinis SyGroup, showed with an impressive example what you can achieve with the SUSE Cloud Application Platform and what matters to the end customer. He also mentioned the great collaboration with SUSE during the project. I think it's really good to know that you get the help and support from SUSE that you need, especially in new pioneering projects.

The third speaker on stage was Bo Jin (Sales Engineer and Consultant at SUSE). Really impressive knowledge! He talked a lot about Cloud Foundry as well as Kubernetes, the Cloud Application Platform, and CaaS. A highlight for me was his really impressive demo of pushing code to Cloud Foundry and deploying from GitHub into a container. Everything seems really easy to manage.

Last but not least, we got some insight into SUSE Manager and how it can help you centralize system administration and patch handling, as well as AutoYaST profiles and kickstart files. This tool is suitable for Ubuntu, CentOS, Red Hat and, of course, SUSE servers. Everything is handled centrally for almost all distributions. That makes life much easier.
Bo Jin also showed us kernel live patching in a demo and gave us some background information. Did you know, for example, that even with Kernel Live Patching enabled, you have to reboot at least once every 12 months?

In a nutshell – it was nice to see how passionate and innovative SUSE is, and they presented great tools, even if they could mostly only show us the tools currently in scope – I can't wait to test them!

The article SUSE Expert Day Zürich first appeared on Blog dbi services.

Clarification of Application Management Pack for Oracle Utilities

Anthony Shorten - Mon, 2019-05-27 19:27

As the Product Manager for a number of products, including the Oracle Application Management Pack for Oracle Utilities (the Oracle Enterprise Manager plugin for managing Oracle Utilities Application Framework products), there are always a number of requests that come across my desk that need clarification. I wanted to post a few responses to some common issues we are seeing in the field and how to address them.

  • Read the Installation Documentation. This is obvious but the installation documentation not only talks about the installation of the pack but also what features to enable on the Oracle Utilities Application Framework to maximize the benefits of the pack. For customers on older versions of the Oracle Utilities Application Framework, some of the advanced features of the pack are not available as those versions of the Oracle Utilities Application Framework do not exist. For example, most of the metrics were added in Oracle Utilities Application Framework 4.x.
  • Clustering Support Changes. In earlier versions of Oracle Utilities Application Framework with older versions of Oracle WebLogic, to use clustering you needed to use the localhost hostname. Whilst this worked for those versions (and has been used with later versions), Oracle recommends using the actual host name in the configuration, and in particular for the JMX integration used by the Oracle Utilities Application Framework. Using localhost as the host name may prevent Oracle Enterprise Manager and the pack from recognizing the active ports and will result in the target not being recognized.
  • Native Install Transition. A few years ago, the Oracle Utilities Application Framework transitioned to use Oracle WebLogic natively rather than the original embedded mode which was popular in legacy versions of the product. The main reason for moving to native mode was to allow customers full flexibility when using the Oracle WebLogic Domain so that the license value was increased and advanced configuration was supported. To support that transition a decision was made in respect to the pack:
    • Embedded Mode Customers. It is recommended that customers still on the old embedded mode that have not transitioned to a native installation, yet, use the Start / Stop functionality on the Oracle Utilities targets to manage the availability of Oracle Utilities targets. 
    • Native Mode Customers. Customers who have transitioned to the native installation are recommended to use the Oracle WebLogic targets to start and stop the product (as this is one of the benefits of the native mode). It is NOT recommended to use the start and stop functionality on the Oracle Utilities targets during the transition period. The current releases of Enterprise Manager and the Application Management Pack for Oracle Utilities cannot manage across both target types at present.

Essentially, if you are on embedded mode, use the start/stop on the Oracle Utilities targets; if you are on native mode, use the start/stop on the Oracle WebLogic targets.

Choiceology with Katy Milkman

Michael Dinh - Mon, 2019-05-27 15:15

Good listening that I thought I would share with you.

RSS Feed:
Choiceology with Katy Milkman – Exposing the psychological traps that lead to expensive mistakes

You can listen and subscribe to Choiceology with Katy Milkman for free in any podcast player—such as Apple Podcasts, Google Podcasts or Spotify.

Re-partitioning 2

Jonathan Lewis - Mon, 2019-05-27 14:20

Last week I wrote a note about turning a range-partitioned table into a range/list composite partitioned table using features included in 12.2 of Oracle. But my example was really just an outline of the method and bypassed a number of the little extra problems you’re likely to see in a real-world system, so in this note I’m going to bring in an issue that you might run into – and which I’ve seen appearing a number of times: ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION.

It’s often the case that a system has a partitioned table that’s been around for a long time, and over its lifetime it may have had (real or virtual) columns added, made invisible, dropped, or marked unused. As a result you may find that the apparent definition of the table is not the same as the real definition of the table – and that’s why Oracle has given us (in 12c) the option to “create table for exchange”.

You might like to read a MoS note giving you one example of a problem with creating an exchange table prior to this new feature. ORA-14097 At Exchange Partition After Adding Column With Default Value (Doc ID 1334763.1) I’ve created a little model by cloning the code from that note.

rem     Script:         pt_exchange_problem.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2019

create table mtab (pcol number)
partition by list (pcol) (
        partition p1 values (1),
        partition p2 values (2)
);

alter table mtab add col2 number default 0 not null;

prompt  ========================================
prompt  Traditional creation method => ORA-14097
prompt  ========================================

create table mtab_p2 as select * from mtab where 1=0;
alter table mtab exchange partition P2 with table mtab_p2;

prompt  ===================
prompt  Create for exchange
prompt  ===================

drop table mtab_p2 purge;
create table mtab_p2 for exchange with table mtab;
alter table mtab exchange partition P2 with table mtab_p2;


Here’s the output from running this on an instance of 18.3:

Table created.

Table altered.

Traditional creation method => ORA-14097

Table created.

alter table mtab exchange partition P2 with table mtab_p2
ERROR at line 1:
ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION

Create for exchange

Table dropped.

Table created.

Table altered.

So we don’t have to worry about problems creating an exchange table in Oracle 12c or later. But we do still have a problem if we’re trying to convert our range-partitioned table into a range/list composite partitioned table using the “double-exchange” method. In my simple example I used a “create table” statement to create an empty table that we could exchange into; but without another special version of a “create table” command I won’t be able to create a composite partitioned table that is compatible with the simple table that I want to use as my intermediate table.

Here’s the solution to that problem – first in a thumbnail sketch:

  • create a table for exchange (call it table C)
  • alter table C modify to change it to a composite partitioned table with one subpartition per partition
  • create a table for exchange (call it table E)
  • Use table E to exchange partitions from the original table to the (now-partitioned) table C
  • Split each partition of table C into the specific subpartitions required

And now some code to work through the details – first the code to create and populate the partitioned table.

rem     Script:         pt_comp_from_pt_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2019

drop table t purge;
drop table pt_range purge;
drop table pt_range_list purge;

-- @@setup

create table pt_range (
        id              number(8,0)     not null,
        grp             varchar2(1)     not null,
        small_vc        varchar2(10),
        padding         varchar2(100)
)
partition by range(id) (
        partition p200 values less than (200),
        partition p400 values less than (400),
        partition p600 values less than (600)
);

-- (reconstructed) populate 600 rows; grp values 0-2 match the list values used later
insert into pt_range
select
        rownum, mod(rownum,3), lpad(rownum,10,'0'), rpad('x',100,'x')
from    all_objects
where
	rownum <= 600 -- > comment to avoid WordPress format issue
;


Then some code to create the beginnings of the target composite partitioned table. We create a simple heap table “for exchange”, then modify it to be a composite partitioned table with a named starting partition and high_value, and a template defining a single subpartition; then, as a variant on the example from last week, we specify interval partitioning.

prompt	==========================================
prompt	First nice feature - "create for exchange"
prompt	==========================================

create table pt_range_list for exchange with table pt_range;

prompt	============================================
prompt	Now alter the table to composite partitioned
prompt	============================================

alter table pt_range_list modify
partition by range(id) interval (200)
subpartition by list (grp)
subpartition template (
        subpartition p_def      values(default)
)
(
	partition p200 values less than (200)
)
;

If you want to do the conversion from range partitioning to interval partitioning you will have to check very carefully that your original table will be able to convert safely – which means you’ll need to check that the “high_value” values for the partitions are properly spaced to match the interval you’ve defined and (as a special requirement for the conversion) there are no omissions from the current list of high values. If your original table doesn’t match these requirements exactly you may end up trying to exchange data into a partition where it doesn’t belong; for example, if my original table had partitions with high values of 200, 600, 800 then there may be values in the 200-399 range currently stored in the original “600” range partition which shouldn’t go into the new “600” interval partition. You may find you have to split (and/or merge) a few partitions in your range-partitioned table before you can do the main conversion.
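That spacing check can be sketched outside the database. The high-value figures below are assumed for illustration (in practice they would come from the HIGH_VALUE column of user_tab_partitions):

```python
# Hypothetical sketch: verify that range-partition high values are evenly
# spaced at the chosen interval, with no boundaries missing.
def check_interval_fit(high_values, interval):
    hv = sorted(high_values)
    # every gap between consecutive boundaries must equal the interval
    return all(b - a == interval for a, b in zip(hv, hv[1:]))

# The example table (200, 400, 600) fits an interval of 200 ...
print(check_interval_fit([200, 400, 600], 200))   # True
# ... but 200, 600, 800 does not: the 400 boundary is missing
print(check_interval_fit([200, 600, 800], 200))   # False
```

A table failing this check would need some partitions split (and/or merged) before the conversion.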

Now we create the table that we’ll actually use for the exchange and go through each exchange in turn. Because I’ve got an explicitly named starting partition the first exchange takes only two steps – exchange out, exchange in. But because I’m using interval partitioning in the composite partitioned table I’m doing a “lock partition” before the second exchange on all the other partitions as this will bring the required target partition into existence. I’m also using the “[sub]partition for()” syntax to identify the pairs of [sub]partitions – this isn’t necessary for the original range-partitioned table, of course, but it’s the only way I can identify the generated subpartitions that will appear in the composite partitioned table.

create table t for exchange with table pt_range;

prompt	=======================================================================
prompt	Double exchange to move a partition to become a composite subpartition
prompt	Could drive this programmatically by picking one row from each partition
prompt	=======================================================================

alter table pt_range exchange partition p200 with table t;
alter table pt_range_list exchange subpartition p200_p_def with table t;

alter table pt_range exchange partition for (399) with table t;
lock  table pt_range_list partition for (399) in exclusive mode;
alter table pt_range_list exchange subpartition for (399,'0') with table t;

alter table pt_range exchange partition for (599) with table t;
lock  table pt_range_list partition for (599) in exclusive mode;
alter table pt_range_list exchange subpartition for (599,'0') with table t;

prompt	=====================================
prompt	Show that we've got the data in place
prompt	=====================================

execute dbms_stats.gather_table_stats(user,'pt_range_list',granularity=>'ALL')

break on partition_name skip 1

select  partition_name, subpartition_name, num_rows 
from    user_tab_subpartitions 
where   table_name = 'PT_RANGE_LIST'
order by
        partition_name, subpartition_name
;

Now that the data is in the target table we can split each default subpartition into the four subpartitions that we want for each partition. To cater for the future, though, I’ve first modified the subpartition template so that each new partition will have four subpartitions (though the naming convention won’t be applied, of course; Oracle will generate system names for all new partitions and subpartitions).

prompt  ================================================
prompt  Change the subpartition template to what we want
prompt  ================================================

alter table pt_range_list
set subpartition template(
        subpartition p_0 values (0),
        subpartition p_1 values (1),
        subpartition p_2 values (2),
        subpartition p_def values (default)
)
;
prompt  ====================================================
prompt  Second nice feature - multiple splits in one command
prompt  Again, first split is fixed name.
prompt  We could do this online after allowing the users in
prompt  ====================================================

alter table pt_range_list split subpartition p200_p_def
        into (
                subpartition p200_p_0 values(0),
                subpartition p200_p_1 values(1),
                subpartition p200_p_2 values(2),
                subpartition p200_p_def
        )
;

alter table pt_range_list split subpartition for (399,'0')
        into (
                subpartition p400_p_0 values(0),
                subpartition p400_p_1 values(1),
                subpartition p400_p_2 values(2),
                subpartition p400_p_def
        )
;

alter table pt_range_list split subpartition for (599,'0')
        into (
                subpartition p600_p_0 values(0),
                subpartition p600_p_1 values(1),
                subpartition p600_p_2 values(2),
                subpartition p600_p_def
        )
;

Finally a little demonstration that we can’t add an explicitly named partition to the interval partitioned table; then we insert a row to generate the partition and show that it has 4 subpartitions.

Finishing off we rename everything (though that’s a fairly pointless exercise).

prompt  ==============================================================
prompt  Could try adding a partition to show it uses the new template
prompt  But that's not allowed for interval partitions: "ORA-14760:"
prompt  ADD PARTITION is not permitted on Interval partitioned objects
prompt  So insert a value that would go into the next (800) partition
prompt  ==============================================================

alter table pt_range_list add partition p800 values less than (800);

insert into pt_range_list (
        id, grp, small_vc, padding
)
values (
        799, '0', lpad(799,10,'0'), rpad('x',100,'x')
)
;

prompt  ===================================================
prompt  Template naming is not used for the subpartitions,
prompt  so we have to use the "subpartition for()" strategy 
prompt  ===================================================

alter table pt_range_list rename subpartition for (799,'0') to p800_p_0;
alter table pt_range_list rename subpartition for (799,'1') to p800_p_1;
alter table pt_range_list rename subpartition for (799,'2') to p800_p_2;
alter table pt_range_list rename subpartition for (799,'3') to p800_p_def;

prompt  ==============================================
prompt  Might as well clean up the partition names too
prompt  ==============================================

alter table pt_range_list rename partition for (399) to p400;
alter table pt_range_list rename partition for (599) to p600;
alter table pt_range_list rename partition for (799) to p800;

prompt  =======================================
prompt  Finish off by listing the subpartitions 
prompt  =======================================

execute dbms_stats.gather_table_stats(user,'pt_range_list',granularity=>'ALL')

select  partition_name, subpartition_name, num_rows 
from    user_tab_subpartitions 
where   table_name = 'PT_RANGE_LIST'
order by
        partition_name, subpartition_name
;

It’s worth pointing out that you could do the exchanges (and the splitting and renaming at the same time) through some sort of simple PL/SQL loop – looping through the named partitions in the original table and using a row from the first exchange to drive the lock and second exchange (and splitting and renaming). For example, something like the following, which doesn’t have any of the error-trapping and defensive mechanisms you’d want to use on a production system:

declare
        m_pt_val number;
begin
        for r in (select partition_name from user_tab_partitions where table_name = 'PT_RANGE' order by partition_position) loop
                execute immediate
                        'alter table pt_range exchange partition ' || r.partition_name ||
                        ' with table t';
                select id into m_pt_val from t where rownum = 1;
                execute immediate 
                        'lock table pt_range_list partition for (' || m_pt_val || ') in exclusive mode';
                execute immediate
                        'alter table pt_range_list exchange subpartition for (' || m_pt_val || ',0)' ||
                        ' with table t';
        end loop;
end;
/

If you do go for a programmed loop you have to be really careful to consider what could go wrong at each step of the loop and how your program is going to report (and possibly attempt to recover) the situation. This is definitely a case where you don’t want code with “when others then null” appearing anywhere, and don’t be tempted to include code to truncate the exchange table.
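The kind of defensive structure that paragraph argues for can be sketched as follows, in Python rather than PL/SQL and purely as an illustration; `run` and `fetch_sample_id` are hypothetical callables standing in for your SQL execution layer, not any real driver API:

```python
# Hypothetical sketch of a defensive exchange loop: stop and report at the
# first failure rather than swallowing errors.
def migrate_partitions(partitions, run, fetch_sample_id):
    completed = []
    for p in partitions:
        try:
            run(f"alter table pt_range exchange partition {p} with table t")
            m_pt_val = fetch_sample_id()  # one id from the just-exchanged rows
            run(f"lock table pt_range_list partition for ({m_pt_val}) in exclusive mode")
            run(f"alter table pt_range_list exchange subpartition for ({m_pt_val},0) with table t")
            completed.append(p)
        except Exception as exc:
            # Fail loudly with context: never "when others then null",
            # and never truncate the exchange table to "recover".
            raise RuntimeError(
                f"failed at partition {p}; completed so far: {completed}"
            ) from exc
    return completed
```

Reporting which partitions completed before the failure is what lets you resume (or back out) deliberately instead of guessing at the state of the two tables.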



Subscribe to Oracle FAQ aggregator