Feed aggregator

An exotic feature in the content server: check_client_version

Yann Neuhaus - Sun, 2019-06-16 12:26

A few months ago, I tripped over a very mysterious problem while attempting to connect to a 7.3 CS docbase from within dqMan.
We had 3 docbases and we could connect using this client to all of them but one! Moreover, we could connect to all three using a remote Documentum Administrator or the local idql/iapi command-line tools. Since we could connect to at least one of them with dqMan, the utility itself was not to blame. Also, since all three docbases accepted connections, they were all OK in this respect. Ditto for the accounts used, dmadmin or nominative ones: local connections were possible, hence the accounts were all active and, as they could be used from within the remote DA, their identification method and password were correct too.
We tried connecting from different workstations in order to rule out the dqMan side, we cleared its caches, we reinstalled it, but to no avail. We checked the content server’s log: as usual, nothing relevant. It was just the combination of this particular docbase AND dqMan. How strange!
So what the heck was wrong here?
As we weren’t the only administrators of those repositories, we more or less suspected that someone else had changed some setting, but which one? OK, I sort of gave it away in the title, but please bear with me and read on.
I don’t remember exactly how (we were probably working in panic mode), but we eventually decided to compare the docbases’ dm_docbase_config objects side by side, as shown below (with some obfuscation):

paste <(iapi bad_boy -Udmadmin -Pxxx <<eoq | awk '{print substr($0, 1, 80)}'
retrieve,c,dm_docbase_config
dump,c,l
quit
eoq
) <(iapi good_boy -Udmadmin -Pxxx <<eoq | awk '{print substr($0, 1, 80)}'
retrieve,c,dm_docbase_config
dump,c,l
quit
eoq
) | column -c 30 -s $'\t' -t | tail -n +11 | head -n 48
USER ATTRIBUTES                                          USER ATTRIBUTES
  object_name                     : bad_boy                object_name                     : good_boy
  title                           : bad_boy Repository     title                           : good_boy Global Repository
  subject                         :                        subject                         :
  authors                       []:                        authors                       []: 
  keywords                      []:                        keywords                      []: 
  resolution_label                :                        resolution_label                :
  owner_name                      : bad_boy                owner_name                      : good_boy
  owner_permit                    : 7                      owner_permit                    : 7
  group_name                      : docu                   group_name                      : docu
  group_permit                    : 5                      group_permit                    : 5
  world_permit                    : 3                      world_permit                    : 3
  log_entry                       :                        log_entry                       :
  acl_domain                      : bad_boy                acl_domain                      : good_boy
  acl_name                        : dm_450xxxxx80000100    acl_name                        : dm_450xxxxx580000100
  language_code                   :                        language_code                   :
  mac_access_protocol             : nt                     mac_access_protocol             : nt
  security_mode                   : acl                    security_mode                   : acl
  auth_protocol                   :                        auth_protocol                   :
  index_store                     : DM_bad_boy_INDEX       index_store                     : DM_good_boy_INDEX
  folder_security                 : T                      folder_security                 : T
  effective_date                  : nulldate               effective_date                  : nulldate
  richmedia_enabled               : T                      richmedia_enabled               : T
  dd_locales                   [0]: en                     dd_locales                   [0]: en
  default_app_permit              : 3                      default_app_permit              : 3
  oldest_client_version           :                        oldest_client_version           :
  max_auth_attempt                : 0                      max_auth_attempt                : 0
  client_pcaching_disabled        : F                      client_pcaching_disabled        : F
  client_pcaching_change          : 1                      client_pcaching_change          : 1
  fulltext_install_locs        [0]: dsearch                fulltext_install_locs        [0]: dsearch
  offline_sync_level              : 0                      offline_sync_level              : 0
  offline_checkin_flag            : 0                      offline_checkin_flag            : 0
  wf_package_control_enabled      : F                      wf_package_control_enabled      : F
  macl_security_disabled          : F                      macl_security_disabled          : F
  trust_by_default                : T                      trust_by_default                : T
  trusted_docbases              []:                        trusted_docbases              []: 
  login_ticket_cutoff             : nulldate               login_ticket_cutoff             : nulldate
  auth_failure_interval           : 0                      auth_failure_interval           : 0
  auth_deactivation_interval      : 0                      auth_deactivation_interval      : 0
  dir_user_sync_on_demand         : F                      dir_user_sync_on_demand         : F
  check_client_version            : T                      check_client_version            : F
  audit_old_values                : T                      audit_old_values                : T
  docbase_roles                 []:                        docbase_roles                [0]: Global Registry
  approved_clients_only           : F                      approved_clients_only           : F
  minimum_owner_permit            : 2                      minimum_owner_permit            : 0
  minimum_owner_xpermit           :                        minimum_owner_xpermit           :
  dormancy_status                 :                        dormancy_status                 :

The only significant differences were the highlighted ones, and the most obvious one was the attribute check_client_version: it was turned on in the bad_boy repository. Now that we finally had something to blame, the universe started making sense again! We quickly turned this setting to false and could eventually connect to that recalcitrant docbase. But the question was still open: check against what? What criterion was applied to refuse dqMan access to bad_boy but to allow it to good_boy? That was still not clear, even though we could work around it.
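For the record, here is how the setting can be switched off from the command line; this is a minimal sketch with placeholder credentials, using the same API verbs shown further down in this article:

iapi bad_boy -Udmadmin -Pxxx <<eoq
retrieve,c,dm_docbase_config
set,c,l,check_client_version
F
save,c,l
reinit,c
quit
eoq

The reinit at the end matters; as we will see below, the change is not picked up by the running servers without it.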
Now, who turned it on and why, that had to remain a mystery.
While we were at it, we also noticed another attribute which seemed to be related to the previous one: oldest_client_version.
Were there any other client_% attributes in dm_docbase_config?

paste <(iapi good_boy -Udmadmin -Pdmadmin <<eoq | grep client
retrieve,c,dm_docbase_config
dump,c,l
quit
eoq) <(iapi bad_boy -Udmadmin -Pdmadmin <<eoq | grep client
retrieve,c,dm_docbase_config
dump,c,l
quit
eoq) | column -s $'\t' -t
  oldest_client_version           :      oldest_client_version           : 
  client_pcaching_disabled        : F    client_pcaching_disabled        : F
  client_pcaching_change          : 1    client_pcaching_change          : 1
  check_client_version            : F    check_client_version            : T
  approved_clients_only           : F    approved_clients_only           : F

Yes, but they looked quite harmless in the current context.
Thus, the relevant attributes here are check_client_version and oldest_client_version. Let’s find out a bit more about them.

Digging

As usual, the documentation is a bit sketchy about these attributes:

check_client_version Boolean S T means that the repository
                               servers will not accept connections
                               from clients older than the
                               version level specified in the
                               oldest_client_version property.
                               F means that the servers accept
                               connections from any client version.
                               The default is F.

oldest_client_version string(32) S Version number of the oldest
                                    Documentum client that will access
                                    this repository.
                                    This must be set manually. It is used
                                    by the DFC to determine how to
                                    store chunked XML documents. If
                                    check_client_version is set to T, then
                                    this value is also used to identify the
                                    oldest client version level that may
                                    connect to the repository.

But what is the client version? Logically, it is the version of its DFC or, for older clients, the version of the dmcl shared library.
So, if check_client_version is true, the client version is checked and, if it is older than the one defined in oldest_client_version, the client is forbidden to connect. That makes sense, except that in our case oldest_client_version was empty. Maybe in such a case the client has to match the content server’s DFC version exactly? As dqMan was using either the dmcl40.dll library or an old DFC version, it was rejected. Let’s verify this hypothesis with a 16.4 target repository.
Connecting from an ancient 5.3 client
We exhumed an old 5.3 CS installation to use its client part with the default configuration in the target docbase:

dmadmin@osboxes:~/documentum53$ idql dmtest -Udmadmin -Pdmadmin
 
 
Documentum idql - Interactive document query interface
(c) Copyright Documentum, Inc., 1992 - 2004
All rights reserved.
Client Library Release 5.3.0.115 Linux
 
 
Connecting to Server using docbase dmtest
[DM_SESSION_I_SESSION_START]info: "Session 0100c35080003913 started for user dmadmin."
 
 
Connected to Documentum Server running Release 16.4.0080.0129 Linux64.Oracle

Fine so far.
Let’s activate the dm_docbase_config.check_client_version in the target:

retrieve,c,dm_docbase_config
...
set,c,l,check_client_version
SET> T
...
OK
API> save,c,l
...
[DM_DCNFG_E_CANT_SAVE]error: "Cannot save dmtest docbase_config."
 
[DM_DCNFG_E_SET_OLDEST_CLIENT_VERSION_FIRST]error: "The docbase_config object attribute oldest_client_version has to be set before setting attribute check_client_version to T."

Interesting. At that time, this attribute was empty and yet check_client_version was active. Is this constraint new in 16.4 or did the unknown administrator hack around it? As I don’t have a 7.x repository available right now, I cannot test this point.
Let’s play by the rules and set oldest_client_version:

reset,c,l
set,c,l,oldest_client_version
16.4
save,c,l
OK
set,c,l,check_client_version
SET> T
...
OK
API> save,c,l
...
OK

Try connecting from the 5.3 client: still OK.
Maybe a reinit is necessary to make the changes effective:

reinit,c

Try again:

dmadmin@osboxes:~/documentum53$ idql dmtest -Udmadmin -Pdmadmin
Documentum idql - Interactive document query interface
(c) Copyright Documentum, Inc., 1992 - 2004
All rights reserved.
Client Library Release 5.3.0.115 Linux
 
 
Connecting to Server using docbase dmtest
Could not connect
[DM_SESSION_E_START_FAIL]error: "Server did not start session. Please see your system administrator or check the server log.
Error message from server was:
[DM_SESSION_E_AUTH_FAIL]error: "Authentication failed for user dmadmin with docbase dmtest."
 
"

So a reinit is indeed required.
Note the misleading error: it is not the authentication that is wrong but the client version validation. It is such erroneous messages that make the diagnosis of Documentum problems so hard and time-consuming. Anyway, let’s revert check_client_version to F:

set,c,l,check_client_version
F
save,c,l
reinit,c

Try connecting: OK. So the client version filtering is effective. Let’s now try it with a minimum client version of 5.3:

API> set,c,l,oldest_client_version
SET> 5.3
...
OK
API> save,c,l
...
OK
API> set,c,l,check_client_version
SET> T
...
OK
API> save,c,l
...
OK
API> reinit,c
...
OK

Try connecting: OK, that’s expected.
Let’s try it for a minimum 5.2 client version: it still works, which is expected too since the test client’s version is 5.3 and, in my book, 5.3 > 5.2.
Let’s try it for a minimum 5.4 client version: the connection fails, so client version checking works as expected here.
Let’s try it for a minimum 20.0 client version: the connection fails as expected. No check on the version’s value is done, which is quite understandable programmatically speaking, although a bit optimistic given the turmoil Documentum went through lately.
Let’s go back to a more realistic value:

API> set,c,l,oldest_client_version
SET> 7.2
...
[DM_SESSION_E_AUTH_FAIL]error: "Authentication failed for user dmadmin with docbase dmtest."
 
 
API> save,c,l

Oops. Interestingly, the last change did not make it because, with the minimum client version set so far into the future, the current client’s session was disconnected and there was no way to reconnect in order to revert it!
Let’s do the rollback from the database level directly:

sqlplus dmtest@orcl
 
SQL*Plus: Release 12.2.0.1.0 Production on Mon Jun 10 16:25:56 2019
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
Enter password:
Last Successful login time: Mon Jun 10 2019 16:25:40 +02:00
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 
SQL> update dm_docbase_config_s set check_client_version = 0;
 
1 row updated.
SQL> commit;
 
Commit complete.
 
quit;

Try to connect:

iapi dmtest@docker:1489
Please enter a user (dmadmin):
Please enter password for dmadmin:
 
 
OpenText Documentum iapi - Interactive API interface
Copyright (c) 2018. OpenText Corporation
All rights reserved.
Client Library Release 16.4.0070.0035
 
 
Connecting to Server using docbase dmtest
[DM_SESSION_E_AUTH_FAIL]error: "Authentication failed for user dmadmin with docbase dmtest."

Still not OK, because the reinit is missing; but for that we need to connect, which we still cannot do because of the missing reinit. To break this catch-22 situation, let’s cut the Gordian knot and kill the dmtest docbase’s processes:

dmadmin@docker:~$ ps ajxf | grep dmtest
1 27843 27843 27843 ? -1 Ss 1001 0:00 ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
27843 27849 27843 27843 ? -1 S 1001 0:00 \_ /app/dctm/product/16.4/bin/mthdsvr master 0xe901fd2f, 0x7f8a50658000, 0x223000 50000 5 27843 dmtest /app/dctm/dba/log
27849 27850 27843 27843 ? -1 Sl 1001 0:03 | \_ /app/dctm/product/16.4/bin/mthdsvr worker 0xe901fd2f, 0x7f8a50658000, 0x223000 50000 5 0 dmtest /app/dctm/dba/log
27849 27861 27843 27843 ? -1 Sl 1001 0:03 | \_ /app/dctm/product/16.4/bin/mthdsvr worker 0xe901fd2f, 0x7f8a50658000, 0x223000 50000 5 1 dmtest /app/dctm/dba/log
27849 27874 27843 27843 ? -1 Sl 1001 0:03 | \_ /app/dctm/product/16.4/bin/mthdsvr worker 0xe901fd2f, 0x7f8a50658000, 0x223000 50000 5 2 dmtest /app/dctm/dba/log
27849 27886 27843 27843 ? -1 Sl 1001 0:03 | \_ /app/dctm/product/16.4/bin/mthdsvr worker 0xe901fd2f, 0x7f8a50658000, 0x223000 50000 5 3 dmtest /app/dctm/dba/log
27849 27899 27843 27843 ? -1 Sl 1001 0:03 | \_ /app/dctm/product/16.4/bin/mthdsvr worker 0xe901fd2f, 0x7f8a50658000, 0x223000 50000 5 4 dmtest /app/dctm/dba/log
27843 27862 27843 27843 ? -1 S 1001 0:00 \_ ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
27843 27863 27843 27843 ? -1 S 1001 0:00 \_ ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
27843 27875 27843 27843 ? -1 S 1001 0:00 \_ ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
27843 27887 27843 27843 ? -1 S 1001 0:00 \_ ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
27843 27901 27843 27843 ? -1 S 1001 0:00 \_ ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
27843 27944 27843 27843 ? -1 Sl 1001 0:06 \_ ./dm_agent_exec -docbase_name dmtest.dmtest -docbase_owner dmadmin -sleep_duration 0
27843 27962 27843 27843 ? -1 S 1001 0:00 \_ ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini

and:

# note the leading minus: signal the whole process group led by PID 27843
kill -9 -27843

After restarting the docbase, the connectivity was restored.
So, be cautious while experimenting! Needless to say, avoid doing it in a production docbase, or in any heavily used development docbase for that matter, or the wrath of the multiverses and beyond will fall upon you and you will be miserable forever.
Connecting from a 7.3 client
The same behavior and error messages as with the preceding 5.3 client were observed with a more recent 7.3 client and, inferring from the incident above, later clients behave the same way.

Conclusion

We never stop learning stuff with Documentum! While this client version limiting feature looks quite exotic, it may make sense in order to avoid surprises, or even corruptions, when using newly implemented extensions or existing but changed areas of the content server. It is possible that new versions of the DFC behave differently from older ones when dealing with the same functionality, and Documentum had no better choice but to cut the older versions off to prevent any conflict. As usual, though, the implementation looks a bit hasty, with inapt error messages costing hours of investigation and the risk of cutting oneself off from a repository.

This article, An exotic feature in the content server: check_client_version, appeared first on Blog dbi services.

Installing the OE demo schema on 18cXE

The Anti-Kyte - Sat, 2019-06-15 13:18

It’s always a good idea to be wary of assuming too much.
Looking at the current Conservative Party Leadership contest, you might assume that a fantasy Brexit policy and a history of class A drug use were pre-requisites for the job of Prime Minister.
You may further assume that one is a result of the other.
That last assumption is unlikely however, unless the smoking, snorting and otherwise ingesting of illicit substances is widespread across all of the other major political parties. Then again…

For my part, I’ve had to correct some of my assumptions about the process for installing the OE sample schema into Oracle 18cXE running on CentOS 7.
What follows is a quick guide on how to accomplish this…without all the head-scratching over apparently spurious errors.
Specifically, I will be covering :

  • getting the OE schema installation scripts
  • checking the pre-requisites for the OE schema installation have been met
  • preparing the scripts for execution
  • performing the installation

Before we go too much further though, it’s probably wise to state some assumptions…

Assumptions

These steps assume that you’re running 18cXE on CentOS or some other Red Hat compatible distro (e.g. Oracle Linux, Fedora).
We’re only installing the OE schema here. I already have the HR schema installed on the database and I do not want to drop and re-create it.
If you want to install all of the demo schemas then you’ll need to check the instructions in the README.md file once you’ve downloaded the installation scripts.
Speaking of which…

Finding the demo scripts

As stated in the documentation, only the scripts for the HR schema are included in the Oracle software.
If you want the other schemas, you need to download them from Oracle’s GitHub repo.

Although we’re only interested in the OE schema at the moment, the source code is provided in a single archive file.
Download the zip for the appropriate database release ( 18c in my case) and we should now have a file looking something like :

-rw-rw-r-- 1 mike mike 28882177 Jun  9 17:03 db-sample-schemas-18c.zip
Pre-requisites for installing the OE schema

Before I run off and start playing with my new zip file, I should really check that I’ve got everything I need to ensure that the setup will go smoothly.
The pre-requisites are :

  • the HR schema must already be installed and unlocked in the PDB that you are installing into
  • Oracle Spatial must be enabled
  • the installation scripts need to run on the database server (or a filesystem visible to it)

To check that HR is already available in the PDB (xepdb1 in my case) :

alter session set container = xepdb1;

select username, account_status
from cdb_users
where username = 'HR'
and con_id = sys_context('userenv', 'con_id');

USERNAME   ACCOUNT_STATUS 
---------- ---------------
HR         OPEN           

If the query does not return any rows then you will need to install the HR schema. This can be done following the instructions in the aforementioned documentation.

NOTE – before you do this it’s a good idea to double check to make sure that you are in the correct container database :

select sys_context('userenv', 'con_name') from dual;

If the ACCOUNT_STATUS is LOCKED then you need to unlock the HR user as the OE creation script will attempt to connect to the database as HR. To do this, connect to the target PDB as a user with the ALTER USER privilege (e.g. SYSTEM) and run :

alter user hr account unlock;

User HR altered

As I’m on 18cXE, Oracle Spatial should be enabled. Thanks to Norman, I know that I can confirm this by running :

select parameter, value       
from v$option 
where regexp_like(parameter, 'spatial', 'i')
/

PARAMETER            VALUE     
-------------------- ----------
Spatial              TRUE      

Next, we need to upload the zip file to the Database Server (which I’ve done using sftp).

Preparing the Install scripts

Now it’s on the correct machine, I need to change the ownership to oracle…

sudo chown oracle:oinstall db-sample-schemas-18c.zip

ls -l db-sample-schemas-18c.zip 
-rw-rw-r--. 1 oracle oinstall 28882177 Jun 10 12:34 db-sample-schemas-18c.zip

…because I want to move it to the appropriate directory in ORACLE_HOME…

sudo mv db-sample-schemas-18c.zip $ORACLE_HOME/demo/schema
cd $ORACLE_HOME/demo/schema

…and extract it as oracle…

sudo su oracle
unzip db-sample-schemas-18c.zip

cd db-sample-schemas-18c

… and create a directory to hold the log file that’s output when we run the installation…

mkdir order_entry/logs

Now we’ve extracted the files, you may be tempted to have a quick peek at the code.

Initially you may be somewhat surprised. For example, in order_entry/oe_main.sql you can see :

DEFINE vscript = __SUB__CWD__/order_entry/coe_&vrs

Is __SUB__CWD__ some SQL*Plus magic that has eluded you all these years ?

Well, no. Looking at the README.md file, we can confirm that it’s simply a placeholder that we need to replace with a valid absolute path to the scripts.

Fortunately, the README.md file also contains the code required to achieve this.
Simply ensure that we’re in the db-sample-schemas-18c directory and run :

perl -p -i.bak -e 's#__SUB__CWD__#'$(pwd)'#g' *.sql */*.sql */*.dat 

After running this, we can see that the line in oe_main.sql now reads :

DEFINE vscript = /opt/oracle/product/18c/dbhomeXE/demo/schema/db-sample-schemas-18c/order_entry/coe_&vrs
Performing the installation

The main order entry script is in the order_entry sub-directory and is called oe_main.sql.
This script accepts 9 positional parameters :

  1. the password for the OE user that will be created as part of this installation
  2. the default tablespace for OE
  3. the temporary tablespace for OE
  4. the password for the pre-existing HR user
  5. the SYS password
  6. the directory path for the data files (these are in the current directory as they are included in the zip file)
  7. the absolute path for the log directory we just created ( including a trailing slash)
  8. the version (it’s v3 in this case)
  9. the SQL*Plus connect string for the target database

The “version” the script asks for is appended to the basename of some of the scripts that will be run as part of the install.
To obtain this, simply run :

ls -1 *_v?.sql

ccus_v3.sql
cidx_v3.sql
cmnt_v3.sql
coe_v3.sql
cord_v3.sql
cwhs_v3.sql
doe_v3.sql
loe_v3.sql
pcus_v3.sql
poe_v3.sql
pord_v3.sql
pwhs_v3.sql

From this, I can see that the value of version that the script needs is “v3”.

Now, connect to the database via SQL*Plus as SYSTEM. Note that, as I’ve not set up an Oracle client on the server, I need to specify the server name, tns port and pdb name in the connect string :

sqlplus system@frea.virtualbox:1522/xepdb1

Finally, we can run the install script :

@oe_main.sql oe_user_password users temp hr_user_password sys_password . /opt/oracle/product/18c/dbhomeXE/demo/schema/db-sample-schemas-18c/order_entry/logs/ v3 frea.virtualbox:1522/xepdb1

The log file generated during the installation will contain everything that now whizzes up the screen.
You can find it after the run in the logs directory we created earlier :

ls -l logs
total 8
-rw-r--r--. 1 oracle oinstall 6244 Jun 10 13:50 oe_oc_v3.log

Even if there aren’t any particularly alarming errors in the logfile, it would be nice to verify that all has gone as intended.
First we can check that the OE schema now contains the expected number of objects of each type :

select object_type, 
    count(object_name) as "Number of objects"
from dba_objects
where owner = 'OE'
group by object_type
order by 2 desc;

OBJECT_TYPE          Number of objects
-------------------- -----------------
INDEX                               48
TYPE                                37
LOB                                 15
TABLE                               14
VIEW                                13
SYNONYM                              6
TRIGGER                              4
TYPE BODY                            3
SEQUENCE                             1
FUNCTION                             1

10 rows selected. 

We can also confirm that data has been loaded into the tables :

set serverout on size unlimited
declare
    l_count pls_integer;
begin
    for r_tab in (
        select dbms_assert.sql_object_name(owner||'.'||table_name) oe_table
        from dba_tables
        where owner = 'OE'
        and nested = 'NO'
        order by table_name)
    loop
        execute immediate 'select count(*)  from '||r_tab.oe_table 
            into l_count;
        dbms_output.put_line(upper(r_tab.oe_table)||' contains '||l_count||' records.');
    end loop;
end;
/

OE.CUSTOMERS contains 319 records.
OE.INVENTORIES contains 1112 records.
OE.ORDERS contains 105 records.
OE.ORDER_ITEMS contains 665 records.
OE.PRODUCT_DESCRIPTIONS contains 8640 records.
OE.PRODUCT_INFORMATION contains 288 records.
OE.PROMOTIONS contains 2 records.
OE.WAREHOUSES contains 9 records.


PL/SQL procedure successfully completed.

The Country may be descending into chaos but at least we’ve got a working OE schema to play around with.

Running Oracle JET as Progressive Web App

Andrejus Baranovski - Sat, 2019-06-15 09:44
The Progressive Web Apps (PWA) topic is a hot thing in web development these days. Read more about it - Progressive Web Apps. The beauty and power behind PWA: a user can install a web app to his mobile device without going through the app store. This simplifies the update process too; when a new version of the app is available, the user will get it straight away, because it is essentially a Web page wrapped to look like an installed app.

Inspired by this post - A Simple Progressive Web App Tutorial - I decided to add PWA config into an Oracle JET app and test how it works (on Android; I didn't test on iOS, but there is nothing JET specific, so if PWA is supported on iOS, it should work).

The Oracle JET PWA sample app is deployed on Heroku (PWA will work only if the app is served over HTTPS) and available under this URL. The sample app is available on a GitHub repo. A Node.js wrapper for this sample is available in another GitHub repo; you can use it to deploy on Heroku or another service.

Access the JET app URL; the app will be loaded and you should see an Android notification at the bottom. The Google Chrome mobile browser automatically recognizes a PWA app by its manifest and offers to "install" it by adding it to the home screen:


Select notification and you will get a confirmation message:


Select "Add" and the Web app will be added to the home screen. It will look like a real mobile app to the user. For example, the user can get runtime stats for the app, and check storage and data usage metrics:


The app is added to the home screen (look for the Oracle JET icon):


Select the app icon and the app will open. There is no URL address bar in the header and indeed it looks like a mobile app, not a Web page:


The app will be recognized as a PWA if certain config steps are implemented. One of them: you need to add a manifest file (in Oracle JET, add it in the same folder as index.html) and provide app icons, name, etc.:
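As a hypothetical minimal example, such a manifest.json could look like the following; the name and icon paths are placeholders, not the sample app's actual values:

{
  "name": "Oracle JET PWA Sample",
  "short_name": "JET PWA",
  "start_url": "index.html",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#312d2a",
  "icons": [
    { "src": "css/images/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "css/images/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}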


The manifest file must be included through a reference in the app entry point - the index page (Oracle JET's index.html, for example):
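A single link tag in the head section of index.html is enough; for example (assuming the file is named manifest.json):

<link rel="manifest" href="manifest.json">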


In addition to the manifest, the app must define a service worker (as with the manifest file, you can create this file in the same directory as Oracle JET's index.html). PWA doesn't only bring the visual experience of a real app to the Web application: you can define a cache store for the app files, which means that next time, even when offline, the app files will load from local cache storage and there will be no need to download them from the internet:
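A minimal cache-first service worker along those lines could look like this; the cached file list is an assumption and should be adjusted to the JET app's actual resources:

// sw.js - minimal cache-first service worker (hypothetical file list)
var CACHE_NAME = 'jet-pwa-cache-v1';
var urlsToCache = ['index.html', 'manifest.json', 'js/main.js', 'css/app.css'];

self.addEventListener('install', function (event) {
  // pre-cache the application shell at install time
  event.waitUntil(
    caches.open(CACHE_NAME).then(function (cache) {
      return cache.addAll(urlsToCache);
    })
  );
});

self.addEventListener('fetch', function (event) {
  // serve from the cache first, fall back to the network
  event.respondWith(
    caches.match(event.request).then(function (response) {
      return response || fetch(event.request);
    })
  );
});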


The service worker can be registered from the main.js file, where the Oracle JET context is initialized on the application's initial load. Add the service worker registration at the bottom of main.js:
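A sketch of that registration, assuming the service worker file is called sw.js and sits next to index.html:

// register the service worker if the browser supports it
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('sw.js').then(function (registration) {
    console.log('Service worker registered, scope: ' + registration.scope);
  }).catch(function (error) {
    console.log('Service worker registration failed: ' + error);
  });
}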


The idea of this post was to share a simple example of PWA for Oracle JET. This should help you get started quickly with the PWA support config for an Oracle JET app.

HIUG Interact 2019 Conference Schedule

Jim Marion - Fri, 2019-06-14 11:15

Tomorrow I fly to Orlando for the 2019 HIUG Interact conference. I'm almost packed. I wanted to share my schedule with anyone attending. As usual, we start the conference off right with a session Sunday afternoon.

I am on site for the whole conference and don't leave until Thursday morning. I plan to be in sessions all day every day. I look forward to seeing you there!

Are you presenting? If so, add your session to the comments below.

Trouble-shooting

Jonathan Lewis - Fri, 2019-06-14 03:19

Here’s an answer I’ve just offered on the ODC database forum to a fairly generic type of problem.

The question was about finding out why a “program” that used to take only 10 minutes to complete is currently taking significantly longer. The system is running Standard Edition, and the program runs once per day. There’s some emphasis on the desirability of taking action while the program is still running with the following as the most recent statement of the requirements:

We have a program which run daily 10minutes and suddenly one day,it is running for more than 10minutes…in this case,we are asked to look into the backend session to check what exactly the session is doing.I understand we have to check the events,last sql etc..but we need to get the work done by that session in terms of buffergets or physical reads(in case of standard edition)

1)In this case,we cannot enable trace to the session as it already started running and 10 minutes passed way.If we enable trace at this point,will it gives the required info?

2)To check the statistics of this specific session,what is the best way to proceed and this is my initial question.One of my friend pointed out to check v$sess_io and he is not sure whether it will return cumulative values because this view has only sid and not having serial#..and sid is not unique

So, answering the questions as stated, with a little bit of padding:

1) In this case, we cannot enable trace to the session as it already started running and 10 minutes passed way.If we enable trace at this point,will it gives the required info?

If the session has started and has been running for 10 minutes it’s still possible to force tracing into the session and, depending what the program does, you may be lucky enough to get enough information in the trace/tkprof file to help you. The “most-approved” way of doing this for a session is probably through a call to dbms_monitor.session_trace_enable(), but if that’s a package that Standard Edition is not licensed to use then there’s dbms_system.set_sql_trace_in_session().
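For example, assuming the session’s SID and SERIAL# have already been looked up in v$session, forcing trace into the running session could look like this (123 and 456 are placeholders):

-- find the target session first (the module predicate is just an example)
select sid, serial# from v$session where module = 'MyProgram';

-- preferred: dbms_monitor, capturing waits (and binds if you need them)
execute dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => true, binds => false)

-- alternative when dbms_monitor is not an option
execute dbms_system.set_sql_trace_in_session(123, 456, true)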

If this doesn’t help, and if the session is still live and running, you could also look at v$open_cursor for that SID to see what SQL statements (sql_id, child_address, last_active_time and first 60 characters of the text) are still open for that session, then query v$sql for more details about those statements (using sql_id and child_address). The stats you find in those statements are accumulated across all executions by all sessions from the moment the cursor went into the library cache, but if this is a program that runs once per day then it’s possible that the critical statement will only be run by that one session, and the cursor from the previous day will have aged out of the library cache so that what you see is just today’s run.
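A sketch of that investigation, with 123 as a placeholder SID:

-- which cursors does the session still hold open?
select sql_id, last_sql_active_time, substr(sql_text, 1, 60) sql_text
from   v$open_cursor
where  sid = 123;

-- then drill into the accumulated stats for any interesting statement
select executions, buffer_gets, disk_reads, elapsed_time
from   v$sql
where  sql_id = '&m_sql_id';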

Since you’re on Standard Edition and don’t have access to the AWR you should have installed Statspack – which gives you nearly everything that you can get from the AWR reports (the most significant difference being the absence of the v$active_session_history – but there are open-source emulators that you can install as a fairly good substitute for that functionality). If there is one statement in your program that does a lot of work then it’s possible that it might be one of the top N statements in a Statspack snapshot.

If this program is a known modular piece of code, could you alter the mechanism that calls it to include calls to enable session tracing at the start of the program (and disable it, perhaps, at the end of the program)? This might be by modifying the code directly, or by modifying the wrapper that drives the program, or by adding a logon trigger if there’s a mechanism that would allow Oracle to recognise the specific session that runs this particular program; or, if something sets an identifiable (and unambiguous) module and/or action as part of calling the program, then you could use the dbms_monitor package to enable tracing for (e.g.) a particular module and action on a permanent basis.

2) To check the statistics of this specific session,what is the best way to proceed and this is my initial question.One of my friend pointed out to check v$sess_io and he is not sure whether it will return cumulative values because this view has only sid and not having serial#..and sid is not unique

An answer: the stats are for one (SID, SERIAL#), whether you’re looking at v$sess_io, v$sesstat, v$session_event, v$sess_time_model or any other v$sesXXX views that I can’t recall off-hand. In passing, if you can add a wrapper to the calling code, capturing session activity stats (v$sesstat), wait time (v$session_event) and the time model summary (v$sess_time_model) is a very useful debugging aid.
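A minimal sketch of such a capture, with 123 as a placeholder SID (capture the serial# as well, for the reason given below):

-- remember exactly which session this is
select sid, serial# from v$session where sid = 123;

-- session activity stats (non-zero values only)
select sn.name, ss.value
from   v$sesstat ss, v$statname sn
where  ss.statistic# = sn.statistic#
and    ss.sid = 123
and    ss.value != 0;

-- wait time summary for the same session
select event, total_waits, time_waited_micro
from   v$session_event
where  sid = 123;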

And an explanation: the “session” array is a fixed size array, and the SID is the array subscript of the entry your session acquired at logon. Since the array is fixed size Oracle has to keep re-using the array entries so each time it re-uses an array entry it increments the serial# so that (sid, serial#) becomes the unique identifier across history[1]. As it acquires the array entry it also resets/re-initializes all the other v$sesXXX arrays/linked list/structures.

The one thing to watch out for when you try to capture any of the session numbers is that you don’t query these v$sesXXX things twice to find a difference unless you also capture the serial# at the same time, so that you can be sure that the first session didn’t log off and a second session log on to reuse the same SID between your two snapshots. (It’s quite possible that this will be obvious even if you haven’t captured the serial#, because you may spot that some of the values that should only increase with time have decreased.)

 

Footnote

[1] I think there was a time when restarting an instance would reset the serial# to 1. The serial# has to wrap eventually, and the wrap used to be at 65536 because it was stored as a two-byte number – which doesn’t seem to have changed. Nowadays the serial# seems to be maintained across instance restarts (I wonder how that works with carefully timed instance crashes) and, given the amount of history that Oracle could maintain in the AWR, I suppose there could be some scope for connecting two pieces of history that were really from two different sessions.

 

Parallel Fun – 2

Jonathan Lewis - Thu, 2019-06-13 12:24

I started writing this note in March 2015 with the following introductory comment:

A little while ago I wrote a few notes about a very resource-intensive parallel query. One of the points I made about it was that it was easy to model, and then interesting to run on later versions of Oracle. So today I’m going to treat you to a few of the observations and notes I made after modelling the problem; and here’s the SQL to create the underlying objects:

Unfortunately I failed to do anything more with the model I had created until a few days ago (June 2019 – in case I stall again) when a related question came up on the ODC database forum. This time I’m ready to go a little further – so I’ll start with a bait-and-switch approach. Here are the first few lines (omitting the SQL) of an SQL Monitor report from an instance of 18.3 – is this a power-crazed machine or what?


Global Information
------------------------------
 Status              :  DONE (ALL ROWS)
 Instance ID         :  1
 Session             :  TEST_USER (169:11324)
 SQL ID              :  73y5quma4jnw4
 SQL Execution ID    :  16777216
 Execution Started   :  06/13/2019 22:06:32
 First Refresh Time  :  06/13/2019 22:06:32
 Last Refresh Time   :  06/13/2019 22:07:03
 Duration            :  31s
 Module/Action       :  MyModule/MyAction
 Service             :  SYS$USERS
 Program             :  sqlplus@linux183.localdomain (TNS V1-V3)
 Fetch Calls         :  591

Global Stats
=========================================================================================
| Elapsed |   Cpu   |    IO    | Concurrency |  Other   | Fetch | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   | Waits(s) | Calls |  Gets  | Reqs | Bytes |
=========================================================================================
|      14 |    3.18 |     0.00 |        0.05 |       11 |   591 |  25978 |   62 |  13MB |
=========================================================================================

Parallel Execution Details (DOP=3 , Servers Allocated=6730)
==========================================================================================

It didn’t take long to run the query, only about 31 seconds. But the thing to notice in the report is that while the DOP is reported as 3, the number of “Servers Allocated” is a massive 6,730. So the big question – before I show you more of the report, explain what’s happening, and supply the code to build the model: how many PX processes did I actually start?

Here’s a little more of the output:


Parallel Execution Details (DOP=3 , Servers Allocated=6730)
==========================================================================================================================================================
|      Name      | Type  | Group# | Server# | Elapsed |   Cpu   |    IO    | Concurrency |  Other   | Buffer | Read | Read  |        Wait Events         |
|                |       |        |         | Time(s) | Time(s) | Waits(s) |  Waits(s)   | Waits(s) |  Gets  | Reqs | Bytes |         (sample #)         |
==========================================================================================================================================================
| PX Coordinator | QC    |        |         |      14 |    3.13 |          |        0.05 |       11 |  23727 |      |     . | PX Deq: Join ACK (5)       |
|                |       |        |         |         |         |          |             |          |        |      |       | PX Deq: Signal ACK EXT (2) |
|                |       |        |         |         |         |          |             |          |        |      |       | sql_id: 6405a2hc50bt4 (1)  |
| p004           | Set 1 |      1 |       1 |    0.00 |    0.00 |          |             |          |    180 |      |     . | library cache: mutex X (1) |
|                |       |        |         |         |         |          |             |          |        |      |       |                            |
| p005           | Set 1 |      1 |       2 |    0.00 |    0.00 |          |             |          |    100 |      |     . |                            |
| p006           | Set 1 |      1 |       3 |    0.00 |    0.00 |          |             |          |     90 |      |     . |                            |
| p000           | Set 1 |      2 |       1 |    0.01 |    0.01 |          |             |          |        |      |     . |                            |
| p001           | Set 1 |      2 |       2 |    0.02 |    0.02 |          |             |          |        |      |     . |                            |
| p002           | Set 2 |      2 |       1 |    0.01 |    0.01 |     0.00 |             |          |    944 |   32 |   7MB |                            |
| p003           | Set 2 |      2 |       2 |    0.01 |    0.01 |     0.00 |             |          |    937 |   30 |   7MB |                            |
==========================================================================================================================================================

Despite “allocating” 6,730 servers, Oracle is only admitting to having used 7 of them – so let’s take a closer look at how they’re used. There are two groups, and we have one set of 3 slaves in group 1 and two sets of two slaves in group 2. (It looks to me as if the Group# and Type columns should be the other way around, given the hierarchy of group / type / server#.) We can understand a little more of what these numbers mean if we look at the execution plan – particularly the special columns relating to Data Flow Operations (DFOs) and “DFO trees”.


SQL Plan Monitoring Details (Plan Hash Value=3398913290)
========================================================================================================================================================================
| Id |          Operation           |   Name   |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read | Read  |  Mem  | Activity |      Activity Detail       |
|    |                              |          | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes | (Max) |   (%)    |        (# samples)         |
========================================================================================================================================================================
|  0 | SELECT STATEMENT             |          |         |      |        32 |     +0 |     1 |     8846 |      |       |     . |     2.70 | Cpu (1)                    |
|  1 |   FILTER                     |          |         |      |        32 |     +0 |     1 |     8846 |      |       |     . |     5.41 | PX Deq: Signal ACK EXT (2) |
|  2 |    PX COORDINATOR            |          |         |      |        32 |     +0 |     5 |     8846 |      |       |     . |          |                            |
|  3 |     PX SEND QC (RANDOM)      | :TQ20002 |    9146 |  128 |        29 |     +2 |     2 |     8846 |      |       |     . |          |                            |
|  4 |      HASH JOIN BUFFERED      |          |    9146 |  128 |        29 |     +2 |     2 |     8846 |      |       |   9MB |          |                            |
|  5 |       PX RECEIVE             |          |    8846 |   11 |        14 |     +2 |     2 |     8846 |      |       |     . |          |                            |
|  6 |        PX SEND HYBRID HASH   | :TQ20000 |    8846 |   11 |         1 |     +0 |     2 |     8846 |      |       |     . |          |                            |
|  7 |         STATISTICS COLLECTOR |          |         |      |         1 |     +0 |     2 |     8846 |      |       |     . |          |                            |
|  8 |          PX BLOCK ITERATOR   |          |    8846 |   11 |         1 |     +0 |     2 |     8846 |      |       |     . |          |                            |
|  9 |           TABLE ACCESS FULL  | T2       |    8846 |   11 |         1 |     +0 |    23 |     8846 |   24 |   1MB |     . |          |                            |
| 10 |       PX RECEIVE             |          |   50000 |  116 |        14 |     +2 |     2 |     2509 |      |       |     . |          |                            |
| 11 |        PX SEND HYBRID HASH   | :TQ20001 |   50000 |  116 |         1 |     +0 |     2 |     2509 |      |       |     . |          |                            |
| 12 |         PX BLOCK ITERATOR    |          |   50000 |  116 |         1 |     +0 |     2 |     2509 |      |       |     . |          |                            |
| 13 |          TABLE ACCESS FULL   | T1       |   50000 |  116 |         1 |     +0 |    26 |     2509 |   38 |  12MB |     . |          |                            |
| 14 |    PX COORDINATOR            |          |         |      |        31 |     +1 |  8978 |     2252 |      |       |     . |    13.51 | PX Deq: Join ACK (5)       |
| 15 |     PX SEND QC (RANDOM)      | :TQ10000 |       1 |   77 |        32 |     +0 |  6667 |     3692 |      |       |     . |          |                            |
| 16 |      PX BLOCK ITERATOR       |          |       1 |   77 |        32 |     +0 |  6667 |    92478 |      |       |     . |     2.70 | Cpu (1)                    |
| 17 |       TABLE ACCESS FULL      | T3       |       1 |   77 |        32 |     +0 | 53118 |    92478 |   32 |   8MB |     . |    67.57 | Cpu (25)                   |
========================================================================================================================================================================

The “Name” column shows us that we have two DFO trees (:TQ2nnnn and :TQ1nnnn) – this is why we see two “groups” in the PX server detail, and why those groups can have different degrees of parallelism.

Looking at the general shape of the plan you can see that operation 1 is a FILTER operation with two child operations, one at operation 2, the other at operation 14. So we probably have a filter subquery in place, operated as DFO tree #1, while the main query is operated as DFO tree #2. This means the main query is running at DOP = 2 (it’s a hash join with hash distribution, so it needs two sets of slave processes; all the details agree with what we’ve learned about Group# 2 above); and the subquery is operating at DOP = 3, using only one set of slave processes.

There is a little anomaly in the number of Execs of operation 14 – at some point I will examine this more closely, but it might simply be a reporting error that has added the number of Execs of its child operations to its own Execs, it might be something to do with counting in Exec calls by its parent, or it might be a side effect of scalar subquery caching. I’ll worry about it when I have a good reason to do so. What I want to look at is the Execs of operations 15/16, the PX Send QC / PX Block Iterator. There are 6,667 reports of a PX slave executing, and that matches up quite nicely with the 6,730 reported “Servers Allocated” – so it would appear that Oracle says it’s allocating a server whenever it uses a server. But does it really “allocate” (and, presumably, de-allocate)?

Here’s how you find out – you run the query again, taking various snapshots and looking for cross-references. I’ve got some results from v$pq_tqstat and v$pq_slave for the run that produced the SQL Monitor report above, and some of the QC session stats and enqueue stats for a subsequent run. This is what we see:


select  process, count(*) 
from    v$pq_tqstat 
group by 
        process 
order by 
        process
;


PROCESS                    COUNT(*)
------------------------ ----------
P000                              3
P001                              3
P002                              2
P003                              2
P004                           2225
P005                           2214
P006                           2218
QC                             2243


SQL> select slave_name, sessions from V$pq_slave order by slave_name;

SLAV   SESSIONS
---- ----------
P000          1
P001          1
P002          1
P003          1
P004       2242
P005       2242
P006       2242

Key Session Stats
=================
Name                                                                         Value                                                                          
----                                                                         -----                                                                          
opened cursors cumulative                                                    6,955                                                                          
user calls                                                                  20,631                                                                          
recursive calls                                                             20,895                                                                          
enqueue requests                                                            22,699                                                                          
enqueue conversions                                                         13,610                                                                          
enqueue releases                                                            15,894                                                                          
parse count (total)                                                          6,857                                                                          
execute count                                                                6,966                                                                          
DFO trees parallelized                                                           2
Parallel operations not downgraded                                           2,268

Key Enqueue Stats
=================
Type Short name                   Requests       Waits     Success      Failed    Wait m/s                                                                  
---- ----------                   --------       -----     -------      ------    --------                                                                  
DA   Slave Process Array             2,272          13       2,272           0          43                                                                  
PS   contention                     27,160       1,586      27,080           7         415                                                                  
SE   contention                      6,784           0       6,785           0           0                                                                  

TYPE                 DESCRIPTION
-------------------- ------------------------------------------------------------------------
PS                   Parallel Execution Server Process reservation and synchronization
DA                   Slave Process Spawn reservation and synchronization
SE                   Lock used by transparent session migration

Oracle really did start and stop something like 6,700 PX sessions (constantly re-using the same small set of PX slave processes) in the course of executing the filter subquery. This is definitely a performance threat – we keep acquiring and releasing PX slaves, we keep creating new sessions (yes, really), and we keep searching for cursors in the library cache. All these activities are highly contentious. If you start running multiple queries that do this sort of thing you find that you see increasing amounts of time being spent on latch contention, PX slave allocation, mutex waits, and all the other problems you get with sessions that log on, do virtually nothing, then log off in rapid succession.

So how do you write SQL that does this type of thing? Here’s my data model (you may want to limit the number of rows in the tables):


create table t1 as
select * from all_source;

create table t2 as
select * from all_source where mod(line,20) = 1;

create table t3 as
select * from all_source;

And here’s all you have to do to start creating problems – I’ve added explicit hints to force parallelism (particularly for the subquery), it’s more likely that it has been introduced accidentally by table or index definitions, or by an “alter session” to “force parallel”:


set feedback only

select
        /*+ 
                parallel(t1 2) 
                parallel(t2 2)
                leading(t1 t2)
                use_hash(t2)
                swap_join_inputs(t2)
                pq_distribute(t2 hash hash)
                cardinality(t1,50000)
        */
        t1.owner,
        t1.name,
        t1.type
from
        t1
join
        t2
on      t2.owner = t1.owner
and     t2.name = t1.name
and     t2.type = t1.type
where
        t1.line = 1
and     (
           mod(length(t1.text), 10) = 0
        or exists (
                select --+ parallel(t3 3) 
                        null
                from    t3
                where   t3.owner = t1.owner
                and     t3.name = t1.name
                and     t3.type = t1.type
                and     t3.line >= t1.line
                )
        )
;

set feedback on

I’ve written notes in the past about SQL that forces the optimizer to run subqueries as filter subqueries instead of unnesting them – this is just an example of that type of query, pushed into parallelism. It’s not the only way (see comment #1 from Dmitry Remizov below) to end up with scalar subqueries being executed many times as separate DFO trees, even though Oracle has enhanced the optimizer several times over the years in ways that bypass the threat – but the problem can still appear and it’s important to notice in development that you’ve got a query that Oracle can’t work around.

 

Using Cloud Native Buildpacks (CNB) on a local registry to speed up the building of images for test purposes

Pas Apicella - Tue, 2019-06-11 20:58
I previously blogged about the CNCF project known as Cloud Native Buildpacks in the entry below.

Building PivotalMySQLWeb using Cloud Native Buildpacks (CNB)
http://theblasfrompas.blogspot.com/2019/06/building-pivotalmysqlweb-using-cloud.html

In the steps below I will show how to use a local docker registry on your laptop or desktop to enable faster builds of your OCI-compliant images using CNBs, using the same application as before.

Pre Steps:

1. Ensure you have Docker CE installed; if not, use this link

  https://hub.docker.com/search/?type=edition&offering=community

Steps:

1. Start by running a local registry on your own laptop. The guide below shows how to get a container running which will serve as our local registry, and how to verify it's running.

https://docs.docker.com/registry/

$ docker run -d -p 5000:5000 --restart=always --name registry registry:2

Verify it's running:

$ netstat -an | grep 5000
tcp6       0      0  ::1.5000               *.*                    LISTEN
tcp4       0      0  *.5000                 *.*                    LISTEN

2. Then pull the CNB versions of the "official" build and run images from GCR as follows. Those images exist here

https://console.cloud.google.com/gcr/images/cncf-buildpacks-ci/GLOBAL/packs/run?gcrImageListsize=30

Here I am using the latest build/run images, which at the time of this post were at version "0.2.0-build.12"

papicella@papicella:~$ docker pull gcr.io:443/cncf-buildpacks-ci/packs/run:0.2.0-build.12
0.2.0-build.12: Pulling from cncf-buildpacks-ci/packs/run
Digest: sha256:ebd42c0228f776804f2e99733076216592c5a1117f1b3dde7688cf3bd0bbe7b9
Status: Downloaded newer image for gcr.io:443/cncf-buildpacks-ci/packs/run:0.2.0-build.12

papicella@papicella:~$ docker tag gcr.io:443/cncf-buildpacks-ci/packs/run:0.2.0-build.12 localhost:5000/run:0.2.0-build.12

papicella@papicella:~$ docker rmi gcr.io:443/cncf-buildpacks-ci/packs/run:0.2.0-build.12
Untagged: gcr.io:443/cncf-buildpacks-ci/packs/run:0.2.0-build.12
Untagged: gcr.io:443/cncf-buildpacks-ci/packs/run@sha256:ebd42c0228f776804f2e99733076216592c5a1117f1b3dde7688cf3bd0bbe7b9

papicella@papicella:~$ docker push localhost:5000/run:0.2.0-build.12
The push refers to repository [localhost:5000/run]
1315c94f2536: Layer already exists
63696cbb6c17: Layer already exists
30ede08f8231: Layer already exists
b57c79f4a9f3: Layer already exists
d60e01b37e74: Layer already exists
e45cfbc98a50: Layer already exists
762d8e1a6054: Layer already exists
0.2.0-build.12: digest: sha256:ebd42c0228f776804f2e99733076216592c5a1117f1b3dde7688cf3bd0bbe7b9 size: 1780

3. Now let's use our local registry and its build/run images, which will be much faster for local development.

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ docker tag localhost:5000/run:0.2.0-build.12 localhost:5000/run

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ docker push localhost:5000/run:latest
The push refers to repository [localhost:5000/run]
1315c94f2536: Layer already exists
63696cbb6c17: Layer already exists
30ede08f8231: Layer already exists
b57c79f4a9f3: Layer already exists
d60e01b37e74: Layer already exists
e45cfbc98a50: Layer already exists
762d8e1a6054: Layer already exists
latest: digest: sha256:ebd42c0228f776804f2e99733076216592c5a1117f1b3dde7688cf3bd0bbe7b9 size: 1780

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ pack build localhost:5000/pivotal-mysql-web --path ./PivotalMySQLWeb --no-pull --publish
Using default builder image cloudfoundry/cnb:cflinuxfs3
Selected run image cloudfoundry/cnb-run:cflinuxfs3 from builder
Using build cache volume pack-cache-65bb470893c1.build
Executing lifecycle version 0.2.1
===> DETECTING
[detector] Trying group 1 out of 4 with 8 buildpacks...
[detector] ======== Results ========
[detector] pass: Cloud Foundry OpenJDK Buildpack
[detector] skip: Cloud Foundry Build System Buildpack
[detector] pass: Cloud Foundry JVM Application Buildpack
[detector] skip: Cloud Foundry Azure Application Insights Buildpack
[detector] skip: Cloud Foundry Debug Buildpack
[detector] skip: Cloud Foundry Google Stackdriver Buildpack
[detector] skip: Cloud Foundry JMX Buildpack
[detector] skip: Cloud Foundry Procfile Buildpack
===> RESTORING
[restorer] restoring cached layer 'org.cloudfoundry.openjdk:d2df8bc799b09c8375f79bf646747afac3d933bb1f65de71d6c78e7466ff8fe4'
===> ANALYZING
[analyzer] using cached layer 'org.cloudfoundry.openjdk:d2df8bc799b09c8375f79bf646747afac3d933bb1f65de71d6c78e7466ff8fe4'
[analyzer] writing metadata for uncached layer 'org.cloudfoundry.openjdk:openjdk-jre'
[analyzer] writing metadata for uncached layer 'org.cloudfoundry.jvmapplication:main-class'
===> BUILDING
[builder] -----> Cloud Foundry OpenJDK Buildpack 1.0.0-M8
[builder] -----> OpenJDK JRE 11.0.3: Reusing cached layer
[builder]
[builder] -----> Cloud Foundry JVM Application Buildpack 1.0.0-M8
[builder] -----> Main-Class Classpath: Reusing cached layer
[builder] -----> Process types:
[builder]        task: java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[builder]        web:  java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[builder]
===> EXPORTING
[exporter] Reusing layer 'app' with SHA sha256:b32618ed6b86fb496a4ce33db9df49fdd4ef16c5646b174b5643c8befcb7408a
[exporter] Reusing layer 'config' with SHA sha256:9538e967fa10f23b3415c382a3754ebf4c2645c20b6d76af519236c1181e7639
[exporter] Reusing layer 'launcher' with SHA sha256:04ca7957074763290a9abe6a067ce8c902a2ab51ed6c55102964e3f3294cdebd
[exporter] Reusing layer 'org.cloudfoundry.openjdk:openjdk-jre' with SHA sha256:e540f1464509ac673a25bd2f24c7dd6875f805c0dd35e9af84dd4669e2fd0c93
[exporter] Reusing layer 'org.cloudfoundry.jvmapplication:main-class' with SHA sha256:8537197b3f57d86a59397b89b4fbdd14900a602cc12961eae338b9ef2513cdc0
[exporter]
[exporter] *** Image: localhost:5000/pivotal-mysql-web:latest@sha256:f1d7a25fc5159ceb668c26b595dcffb00ef54ada31cbb52eaa8319dc143fc9d8
===> CACHING
[cacher] Reusing layer 'org.cloudfoundry.openjdk:d2df8bc799b09c8375f79bf646747afac3d933bb1f65de71d6c78e7466ff8fe4' with SHA sha256:11439713b023be71211cb83ecd56a1be63e0c0be3e4814a18cc4c71d2264dea5
Successfully built image localhost:5000/pivotal-mysql-web

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ docker pull localhost:5000/pivotal-mysql-web
Using default tag: latest
latest: Pulling from pivotal-mysql-web
410238d178d0: Already exists
a00e90b544bc: Already exists
9de264eecc08: Already exists
4acedf754175: Already exists
d5a72fc0c7a1: Already exists
4066d2d744ac: Already exists
dba1ef680b99: Already exists
Digest: sha256:f1d7a25fc5159ceb668c26b595dcffb00ef54ada31cbb52eaa8319dc143fc9d8
Status: Downloaded newer image for localhost:5000/pivotal-mysql-web:latest

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ docker run -m 1G -p 8080:8080 localhost:5000/pivotal-mysql-web

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.0.RELEASE)

2019-06-12 01:02:16.174  INFO 1 --- [           main] c.p.p.m.PivotalMySqlWebApplication       : Starting PivotalMySqlWebApplication on a018f17d6121 with PID 1 (/workspace/BOOT-INF/classes started by vcap in /workspace)
2019-06-12 01:02:16.179  INFO 1 --- [           main] c.p.p.m.PivotalMySqlWebApplication       : No active profile set, falling back to default profiles: default
2019-06-12 01:02:18.336  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8080 (http)
2019-06-12 01:02:18.374  INFO 1 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2019-06-12 01:02:18.375  INFO 1 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet Engine: Apache Tomcat/9.0.12
2019-06-12 01:02:18.391  INFO 1 --- [           main] o.a.catalina.core.AprLifecycleListener   : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/layers/org.cloudfoundry.openjdk/openjdk-jre/lib:/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib]
2019-06-12 01:02:18.512  INFO 1 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2019-06-12 01:02:18.512  INFO 1 --- [           main] o.s.web.context.ContextLoader            : Root WebApplicationContext: initialization completed in 2270 ms
2019-06-12 01:02:19.019  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'characterEncodingFilter' to: [/*]
2019-06-12 01:02:19.020  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'webMvcMetricsFilter' to: [/*]
2019-06-12 01:02:19.020  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2019-06-12 01:02:19.020  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'formContentFilter' to: [/*]
2019-06-12 01:02:19.021  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'requestContextFilter' to: [/*]
2019-06-12 01:02:19.021  INFO 1 --- [           main] .s.DelegatingFilterProxyRegistrationBean : Mapping filter: 'springSecurityFilterChain' to: [/*]
2019-06-12 01:02:19.022  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'httpTraceFilter' to: [/*]
2019-06-12 01:02:19.022  INFO 1 --- [           main] o.s.b.w.servlet.ServletRegistrationBean  : Servlet dispatcherServlet mapped to [/]
2019-06-12 01:02:19.374  INFO 1 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2019-06-12 01:02:19.918  INFO 1 --- [           main] .s.s.UserDetailsServiceAutoConfiguration :

Using generated security password: 42d4ec01-6459-4205-a66b-1b49d333121e

2019-06-12 01:02:20.043  INFO 1 --- [           main] o.s.s.web.DefaultSecurityFilterChain     : Creating filter chain: Ant [pattern='/**'], []
2019-06-12 01:02:20.092  INFO 1 --- [           main] o.s.s.web.DefaultSecurityFilterChain     : Creating filter chain: any request, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@47e4d9d0, org.springframework.security.web.context.SecurityContextPersistenceFilter@5e4fa1da, org.springframework.security.web.header.HeaderWriterFilter@4ae263bf, org.springframework.security.web.csrf.CsrfFilter@2788d0fe, org.springframework.security.web.authentication.logout.LogoutFilter@15fdd1f2, org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter@2d746ce4, org.springframework.security.web.authentication.ui.DefaultLoginPageGeneratingFilter@70e02081, org.springframework.security.web.authentication.ui.DefaultLogoutPageGeneratingFilter@49798e84, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@1948ea69, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@3f92c349, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@66ba7e45, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6ed06f69, org.springframework.security.web.session.SessionManagementFilter@19ccca5, org.springframework.security.web.access.ExceptionTranslationFilter@57aa341b, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@7c6442c2]
2019-06-12 01:02:20.138  INFO 1 --- [           main] o.s.b.a.e.web.EndpointLinksResolver      : Exposing 9 endpoint(s) beneath base path '/actuator'
2019-06-12 01:02:20.259  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2019-06-12 01:02:20.265  INFO 1 --- [           main] c.p.p.m.PivotalMySqlWebApplication       : Started PivotalMySqlWebApplication in 4.841 seconds (JVM running for 5.646)



And that's it: a locally built OCI image (built very quickly, entirely locally) which you have run from your local image registry.

Here is how to view your local registry using HTTPie, showing the locally built "pivotal-mysql-web" OCI image we created above:

papicella@papicella:~$ http http://localhost:5000/v2/_catalog
HTTP/1.1 200 OK
Content-Length: 63
Content-Type: application/json; charset=utf-8
Date: Wed, 12 Jun 2019 01:53:40 GMT
Docker-Distribution-Api-Version: registry/2.0
X-Content-Type-Options: nosniff

{
    "repositories": [
        "pivotal-mysql-web",
        "run",
        "sample-java-app"
    ]
}


More Information

1. Cloud Native Buildpacks: an Industry-Standard Build Process for Kubernetes and Beyond.
https://content.pivotal.io/blog/cloud-native-buildpacks-for-kubernetes-and-beyond

2. buildpacks.io Home Page
https://buildpacks.io/

Categories: Fusion Middleware

RAMBleed DRAM Vulnerabilities

Oracle Security Team - Tue, 2019-06-11 12:00

On June 11th, security researchers published a paper titled “RAMBleed: Reading Bits in Memory Without Accessing Them”.  This paper describes attacks against Dynamic Random Access Memory (DRAM) modules that are already susceptible to Rowhammer-style attacks.

The new attack methods described in this paper are not microprocessor-specific; they leverage known issues in DRAM memory.  These attacks only impact DDR4 and DDR3 memory modules; older-generation DDR2 and DDR1 memory modules are not vulnerable to these attacks.

While the RAMBleed issues leverage RowHammer, RAMBleed is different in that confidentiality of data may be compromised: RAMBleed uses RowHammer as a side channel to discover the values of adjacent memory. 

Please note that successfully leveraging the RAMBleed exploits requires that the malicious attacker be able to locally execute malicious code against the targeted system.

At this point in time, Oracle believes that:

  • All current and many older families of Oracle x86 (X5, X6, X7, X8, E1) and Oracle SPARC servers (S7, T7, T8, M7, M8) employing DDR4 DIMMs are not expected to be impacted by RAMBleed.  This is because Oracle only employs DDR4 DIMMs that have implemented the Target Row Refresh (TRR) defense mechanism against RowHammer.  Oracle’s memory suppliers have stated that these implementations have been designed to be effective against RowHammer. 
  • Older systems making use of DDR3 memory are also not expected to be impacted by RAMBleed because they are making use of a combination of other RowHammer mitigations (e.g., pseudo-TRR and increased DIMM refresh rates in addition to Error-Correcting Code (ECC)).  Oracle is currently not aware of any research that would indicate that the combination of these mechanisms would not be effective against RAMBleed. 
  • Oracle Cloud Infrastructure (OCI) is not impacted by the RAMBleed issues because OCI servers only use DDR4 memory with built-in defenses as previously described.  Exadata Engineered Systems use DDR4 memory (X5 family and newer) and DDR3 memory (X4 family and older).
  • Finally, Oracle does not believe that additional software patches will need to be produced to address the RAMBleed issues, as these memory issues can only be addressed through hardware configuration changes.  In other words, no additional security patches are expected for Oracle product distributions.
For more information about Oracle Corporate Security Practices, see https://www.oracle.com/corporate/security-practices/

See APEX Debug info in PL/SQL and SQL Developer

Dimitri Gielis - Tue, 2019-06-11 07:05
When developing Oracle APEX apps I like to instrument my code with APEX_DEBUG and/or Logger.

With our APEX Office Print (AOP) PL/SQL API and APEX Plug-in we did the same: we use APEX_DEBUG and Logger behind the scenes to allow you to see what is going on. But when I tried to view the APEX debug messages in SQL Developer, I didn't see any. Christian Neumueller of the APEX Dev team gave me the answer: APEX debug buffers its output, to reduce the I/O overhead. Buffering is disabled for LEVEL9, but the other levels only write:
 - after 1000 records
 - at the end of request processing
 - when you detach the session

This explained perfectly what was happening for me.

Here's a quick example of debugging the AOP PL/SQL API from PL/SQL and SQL Developer. The code downloads an Interactive Report from page 200 as an Excel file and stores it in a table.

declare
  l_return          blob;
  l_output_filename varchar2(100) := 'output';
begin
  apex_session.create_session(p_app_id => 498, p_page_id => 200, p_username => 'DIMI');

  apex_debug.enable(p_level => apex_debug.c_log_level_info);
  -- for more details, use: c_log_level_app_trace
  apex_debug.message(p_message => 'Debug enabled.');

  l_return := aop_api_pkg.plsql_call_to_aop (
                p_data_type       => aop_api_pkg.c_source_type_rpt,
                p_data_source     => 'report1',
                p_template_type   => aop_api_pkg.c_source_type_apex,
                p_template_source => 'aop_template_ir_customers.xlsx',
                p_output_type     => aop_api_pkg.c_excel_xlsx,
                p_output_filename => l_output_filename,
                p_aop_url         => apex_app_setting.get_value('AOP_URL'),
                p_api_key         => apex_app_setting.get_value('AOP_API_KEY'),
                p_app_id          => 498,
                p_page_id         => 200);

  insert into aop_output (output_blob, filename)
  values (l_return, l_output_filename);
  commit;

  dbms_output.put_line('To view debug messages:');
  dbms_output.put_line('select * from apex_debug_messages where session_id = '
    || apex_util.get_session_state('APP_SESSION') || ' order by message_timestamp');

  apex_session.detach;
end;
/

Running the SQL statement to view the debug messages:
select * from apex_debug_messages where session_id = 16458652970080 order by message_timestamp

Et voila... the APEX debug info is available straight away :)


Categories: Development

Redo Dumps

Jonathan Lewis - Tue, 2019-06-11 06:53

A thread started on the Oracle-L list-server a few days ago asking for help analysing a problem where a simple “insert values()” (that handled millions of rows per day) was running very slowly. There are many reasons why this might happen, ranging from the trivial (someone has locked the table in exclusive mode), through the slightly subtle (we’re trying to insert a row that collides on a uniqueness constraint with an uncommitted insert from another session) to the subtle (Oracle has to read through the undo to check current versions of blocks against read-consistent versions) ending up at the esoteric (the ASSM space management blocks are completely messed up again).

A 10046 trace of a session doing an insert showed only that there was a lot of time spent on single block reads. Unfortunately, since this was on an Exadata system, the waits were reported as “cell single block physical read”, and the parameters to this wait event are “cellhash#”, “diskhash#”, and “bytes”, so we don’t see the file_id and block_id which can be very helpful for a case like this. The only information we got from the trace file was that the object_id was for the table where rows were being inserted.

Before digging into exotic debugging methods, the OP supplied us with a 1-second session report from Tanel Poder’s snapper script:


------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>  SID @INST, USERNAME  , TYPE, STATISTIC                                                 ,         DELTA, HDELTA/SEC,    %TIME, GRAPH       , NUM_WAITS,  WAITS/SEC,   AVERAGES
> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>   1070  @2, SYSADMIN, STAT, session logical reads                                     ,         13865,      7.73k,         ,             ,          ,           ,      14.1k total buffer visits
>   1070  @2, SYSADMIN, STAT, user I/O wait time                                        ,           141,      78.65,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, non-idle wait time                                        ,           141,      78.65,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, non-idle wait count                                       ,         12230,      6.82k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, physical read total IO requests                           ,          6112,      3.41k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, physical read requests optimized                          ,          6111,      3.41k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, physical read total bytes optimized                       ,      50069504,     27.93M,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, physical read total bytes                                 ,      50069504,     27.93M,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, cell physical IO interconnect bytes                       ,      50069504,     27.93M,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, gcs messages sent                                         ,             3,       1.67,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, db block gets                                             ,         13860,      7.73k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, db block gets from cache                                  ,         13860,      7.73k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, db block gets from cache (fastpath)                       ,          7737,      4.32k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, consistent gets                                           ,             1,        .56,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, consistent gets from cache                                ,             1,        .56,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, consistent gets pin                                       ,             1,        .56,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, consistent gets pin (fastpath)                            ,             1,        .56,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, logical read bytes from cache                             ,     113541120,     63.34M,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, physical reads                                            ,          6111,      3.41k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, physical reads cache                                      ,          6111,      3.41k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, physical read IO requests                                 ,          6112,      3.41k,         ,             ,          ,           ,      8.19k bytes per request
>   1070  @2, SYSADMIN, STAT, physical read bytes                                       ,      50069504,     27.93M,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, db block changes                                          ,            11,       6.14,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, free buffer requested                                     ,          6112,      3.41k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, hot buffers moved to head of LRU                          ,           958,     534.39,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, free buffer inspected                                     ,          6144,      3.43k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, shared hash latch upgrades - no wait                      ,             7,        3.9,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, blocks decrypted                                          ,          6110,      3.41k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, redo entries                                              ,          6120,      3.41k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, redo size                                                 ,        465504,    259.67k,         ,             ,          ,           ,          ~ bytes per user commit
>   1070  @2, SYSADMIN, STAT, redo entries for lost write detection                     ,          6110,      3.41k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, redo size for lost write detection                        ,        464756,    259.25k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, redo subscn max counts                                    ,             7,        3.9,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, file io wait time                                         ,       1408659,    785.78k,         ,             ,          ,           ,   230.47us bad guess of IO wait time per IO request
>   1070  @2, SYSADMIN, STAT, gc current blocks received                                ,             3,       1.67,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, gc local grants                                           ,          6116,      3.41k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, ASSM cbk:blocks examined                                  ,         12366,       6.9k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, ASSM gsp:L1 bitmaps examined                              ,          2478,      1.38k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, ASSM gsp:L2 bitmaps examined                              ,             1,        .56,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, ASSM gsp:reject db                                        ,         12388,      6.91k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, buffer is pinned count                                    ,           230,      128.3,         ,             ,          ,           ,       1.63 % buffer gets avoided thanks to buffer pin caching
>   1070  @2, SYSADMIN, STAT, cell flash cache read hits                                ,          6723,      3.75k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, TIME, background cpu time                                       ,        365192,   203.71ms,    20.4%, [##        ],          ,           ,
>   1070  @2, SYSADMIN, TIME, background elapsed time                                   ,       1273623,   710.45ms,    71.0%, [########  ],          ,           ,      28.95 % unaccounted time
>   1070  @2, SYSADMIN, WAIT, gc current block busy                                     ,           629,   350.87us,      .0%, [          ],         3,       1.67,   209.67us average wait
>   1070  @2, SYSADMIN, WAIT, cell single block physical read                           ,       1557638,   868.88ms,    86.9%, [WWWWWWWWW ],      6746,      3.76k,    230.9us average wait
>  
> --  End of Stats snap 1, end=2019-05-18 12:58:58, seconds=1.8

My first step was simply to read down the list (using a very small font to get the entire width on screen without wrapping) to see if anything stood out as unusual. The report showed two things I rarely see in the session stats:


blocks decrypted                                          ,          6110
redo entries for lost write detection                     ,          6110

These stats tell me that there are two “uncommon” features enabled: db_lost_write_protect, and block level encryption. (So whatever else is going on, it’s just possible that mixing in two rarely used – and therefore less frequently tested – features may be confusing the issue.)
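
(For reference – my note, not from the original thread – lost write protection is controlled by a single instance parameter which accepts the values NONE, TYPICAL and FULL:)

-- enable "block read record" generation for physical reads
alter system set db_lost_write_protect = typical;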

Lost write protection means Oracle writes a “block read record” (BRR) to the redo log every time it reads a block from disc, so I decided to follow up the 6,110 figure to see what other stats reported similar values.


physical read total IO requests                           ,          6112
physical read requests optimized                          ,          6111
physical reads                                            ,          6111
physical reads cache                                      ,          6111
physical read IO requests                                 ,          6112
free buffer requested                                     ,          6112
redo entries                                              ,          6120
redo entries for lost write detection                     ,          6110
gc local grants                                           ,          6116
cell flash cache read hits                                ,          6723

There’s nothing particularly surprising here – basically we see all the blocks being read as single block reads, into cache. All the necessary global cache (gc) grants are local, so it’s possible the table of interest has been remastered to this node. The value for “cell flash cache read hits” looks a little odd as the cache is hit more frequently than blocks are read – but dynamic performance views are not read-consistent and this session is hammering away like crazy, so this might just be a side effect of the time taken to gather the data for the report.

We can chase the redo a little further – the number of redo entries is slightly larger than the number of blocks read, so (even though small inconsistencies are not necessarily meaningful) this might tell us something:


redo entries                                              ,          6120
redo size                                                 ,        465504
redo entries for lost write detection                     ,          6110
redo size for lost write detection                        ,        464756 
db block changes                                          ,            11

The number of “redo entries” that were NOT for lost write detection is 10, totalling 748 bytes (not a lot – so indicative of “non-user” activity). The number of “db block changes” is 11 (close enough to 10), and generally it’s changes to db blocks that require redo to be generated. The final significant number is the one that isn’t there – there’s no undo generated, so no user-change to data. This system is working like crazy achieving absolutely nothing at this point.

The next point to ponder is what sort of work it is doing – so let’s check how the physical reads turn into buffer gets.


session logical reads                                     ,         13865
db block gets                                             ,         13860
db block gets from cache                                  ,         13860
db block gets from cache (fastpath)                       ,          7737
consistent gets                                           ,             1
consistent gets from cache                                ,             1
consistent gets pin                                       ,             1
consistent gets pin (fastpath)                            ,             1
hot buffers moved to head of LRU                          ,           958

buffer is pinned count                                    ,           230 

The unusual thing you notice with these figures is that virtually every buffer get is a current get. We’ve also got a number of blocks pinned – this might just be the segment header block, or the segment header and level 2 bitmap block that we keep revisiting. Finally we can see a lot of hot buffers being moved to the head of the LRU; since our session has been doing a lot of work for a long time it seems likely that those buffers are ones that our session is keeping hot – and for a big insert that shouldn’t really be happening unless, perhaps, we were managing to do a lot of maintenance of (well-clustered) indexes.

I’ve isolated the (new in 12.2) “ASSM gsp (get space)” statistics from this output – they’re all about handling blocks, but I wanted to look at them without being distracted by other stats.


ASSM cbk:blocks examined                                  ,         12366 
ASSM gsp:L1 bitmaps examined                              ,          2478 
ASSM gsp:L2 bitmaps examined                              ,             1 
ASSM gsp:reject db                                        ,         12388

We can see that we’ve examined 2,478 “level 1” bitmap blocks. A level 1 block holds the basic “bitmap” that records the state of a number of data blocks (typically 128 blocks once the object gets very large) so our session has worked its way through 2,478 maps trying to find a data block that it could use to insert a row. The “reject db” statistic tells us about data blocks that have been examined and rejected (presumably because the row we want to insert is too large to fit, or maybe because there are no free ITL (interested transaction list) entries available in the block). So we seem to be spending all our time searching for somewhere to insert rows. This shouldn’t really be happening – it’s a type of problem that Oracle has been worrying away at for quite some time: how do you avoid “losing” space by updating bitmap blocks too soon on inserts without going to the opposite extreme and leaving bitmap blocks that claim the space is free when it’s in use by uncommitted transactions?

Note to self: I don’t know how we managed to reject more blocks (12,388) than we examined (12,366), but possibly it’s just one of those timing glitches (the error is less than one fifth of one percent), or possibly it’s something to do with the reject count including some of the L1 bitmap blocks.

Clearly there’s something funny going on with space management – and we need to look at a few blocks that are exhibiting the problems. But how do we find a few suitable blocks? And that’s where, finally, we get to the title of the piece.

We are in the lucky position of having “lost write protection” enabled – so the redo log file will hold lots of “block read records”. We can’t get the file and block addresses we need from the “cell physical read” wait events so let’s ask the redo log to supply them. We just have to pick a log file (online or archived) and tell Oracle to dump some of it – and we can probably get away with a fairly small dump since we want just a single type of redo record over a short period of time. Here’s an example showing the format of two slightly different commands you could execute:


alter system dump logfile '/u01/app/oracle/oradata/orcl12c/redo03.log'
        rba min 2781    1
        rba max 2781    1000
        layer 23 opcode 2
;

alter system dump redo 
        scn min 19859991 scn max 19964263 
        layer 23 opcode 2
;

The first command is to dump a log file by name – but you may have to fiddle around a bit to find the name of an archived log file, and if you choose this option you need to know the sequence number (sequence# in v$log_history) of the file if you want to restrict the size of the dump. The second command simply dumps redo for (in this example) an SCN range – and it’s easy to query v$log_history to find dates, times, and SCN ranges – Oracle will work out for itself which files it has to access. In both cases I’ve restricted the dump to just those redo records that contain change vectors of type BRR (block read records), which is what the “layer 23 opcode 2” line is about.
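
That v$log_history lookup might go something like this (my sketch, not from the original post):

select sequence#, first_change#, next_change#, first_time
from   v$log_history
order by first_time desc;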

Here’s an example of a redo record that contains nothing but a single BRR. (It’s from a single-block read, a multi-block read would produce a redo record with multiple change vectors, one vector for each block read.)


REDO RECORD - Thread:1 RBA: 0x000add.00000019.01b0 LEN: 0x004c VLD: 0x10 CON_UID: 2846920952
SCN: 0x00000000025a7c13 SUBSCN:  1 05/23/2019 10:42:51
CHANGE #1 CON_ID:3 TYP:2 CLS:6 AFN:9 DBA:0x00407930 OBJ:40 SCN:0x00000000001a1e2a SEQ:2 OP:23.2 ENC:0 RBL:0 FLG:0x0000
 Block Read - afn: 9 rdba: 0x00407930 BFT:(1024,4225328) non-BFT:(1,31024)
              scn: 0x00000000001a1e2a seq: 0x02
              flags: 0x00000006 ( dlog ckval )
              where: qeilwh05: qeilbk

If you’re wondering about the two interpretations of the rdba (relative datablock address), one is for BFTs (big file tablespaces) and one for non-BFTs. The other thing you’ll notice about the interpretations is that neither file number (1024 or 1) matches the afn (absolute file number). In smaller, non-CDB databases you will probably find that the afn matches the file number in the non-BFT interpretation, but I happen to be testing on a PDB and the first file in my SYSTEM tablespace happens to be the 9th file created in the CDB – connecting as SYS in my PDB I can compare the absolute and “relative” file number very easily:


SQL> select file#, rfile#, name from v$datafile;

     FILE#     RFILE# NAME
---------- ---------- ----------------------------------------------------------------
         9          1 /u01/app/oracle/oradata/orcl12c/orcl/system01.dbf
        10          4 /u01/app/oracle/oradata/orcl12c/orcl/sysaux01.dbf
        11          9 /u01/app/oracle/oradata/orcl12c/orcl/undotbs01.dbf
        12         12 /u01/app/oracle/oradata/orcl12c/orcl/users01.dbf
        13         13 /u01/app/oracle/oradata/orcl12c/orcl/APEX_1991375173370654.dbf
        14         14 /u01/app/oracle/oradata/orcl12c/orcl/APEX_1993195660370985.dbf
        22         22 /u01/app/oracle/oradata/orcl12c/orcl/test_8k_assm.dbf
        23         23 /u01/app/oracle/oradata/orcl12c/orcl/test_8k.dbf

8 rows selected.

For bigfile tablespaces the “relative” file number is a complete fake and simply reports 1024 – you’re only allowed one file in a bigfile tablespace, so there is no “relativity” involved. (Unless you’re working at CERN and storing data about particle collisions in the LHC.)

The key point to remember when reading BRRs then, is that you should take the file number from the afn and the block number from the (appropriate) interpretation of the rdba. For the example above I would issue: “alter database dump datafile 9 block 31024;”
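
If you’d rather not do the hex arithmetic by hand, dbms_utility can decode the non-BFT interpretation of the rdba – a quick sketch using the value from the BRR above:

select
        dbms_utility.data_block_address_file(to_number('00407930','xxxxxxxx'))  file#,
        dbms_utility.data_block_address_block(to_number('00407930','xxxxxxxx')) block#
from    dual;

-- returns file# = 1 and block# = 31024, matching the non-BFT:(1,31024) above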

Finally

The originator of the thread hasn’t yet made any public response to the suggestion of dumping and reviewing blocks – possibly they’ve started a private conversation with Stefan Koehler who had suggested a strategy that examined function calls rather than block contents – so we’re unable to do any further analysis on what’s going on behind the scenes.

What we would be looking for is any indication that Oracle is repeatedly re-reading the same bitmap blocks and the same data blocks (by a simple check of block addresses); and if that is the case we would want to get some clue about why that might be happening by examining the contents of the data blocks that are subject to repeated reads without changing their status in the bitmap from “space available” to “full”.  As it is we just have to wait for the OP to tell us if they’ve made any further progress.

 

Dbvisit Standby 9 Installation on Linux (and Vagrant)

Tim Hall - Tue, 2019-06-11 03:45

The folks at Dbvisit recently released version 9 of their Dbvisit standby product.

It’s been a while since I last played with the product, so I downloaded the free trial and gave it a whirl.

I have to admit I forgot just how easy it is to work with. It feels pretty much like “unzip and go”. The result of my playtime was this article.

I also knocked up a Vagrant build, so I can easily recreate it. You can find that here.

I stuck to a basic configuration of a single instance primary (node1) and standby (node2), with the console on a separate VM (console). If you want to try something more exotic, or you are using Windows, you can get more information from the Installing Dbvisit Standby documentation.

Cheers

Tim…

PS. This isn’t a sponsored post. I’ve known the folks at Dbvisit for years so I keep an eye on what they are doing.


[Part I] Oracle and Microsoft Cloud: Interconnect Overview

Online Apps DBA - Tue, 2019-06-11 02:15

Oracle has announced a cloud interoperability partnership between Microsoft and Oracle Cloud. This cross-cloud interlink enables customers to migrate and run mission-critical enterprise workloads across Microsoft Azure and Oracle Cloud Infrastructure (OCI). Check our latest blog post at https://k21academy.com/oci38 to learn more about: ▪Network Connectivity Between Oracle and Microsoft […]

The post [Part I] Oracle and Microsoft Cloud: Interconnect Overview appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Creating An ATP Instance With The OCI Service Broker

OTN TechBlog - Mon, 2019-06-10 08:03

We recently announced the release of the OCI Service Broker for Kubernetes, an implementation of the Open Service Broker API that streamlines the process of provisioning and binding to services that your cloud native applications depend on.

The Kubernetes documentation lays out the following use case for the Service Catalog API:

An application developer wants to use message queuing as part of their application running in a Kubernetes cluster. However, they do not want to deal with the overhead of setting such a service up and administering it themselves. Fortunately, there is a cloud provider that offers message queuing as a managed service through its service broker.

A cluster operator can setup Service Catalog and use it to communicate with the cloud provider’s service broker to provision an instance of the message queuing service and make it available to the application within the Kubernetes cluster. The application developer therefore does not need to be concerned with the implementation details or management of the message queue. The application can simply use it as a service.

Put simply, the Service Catalog API lets you manage services within Kubernetes that are not deployed within Kubernetes.  Things like messaging queues, object storage and databases can be deployed with a set of Kubernetes configuration files, without needing knowledge of the underlying API or tools used to create those instances, thus simplifying the deployment and making it portable to virtually any Kubernetes cluster.

The current OCI Service Broker adapters that are available at this time include:

  • Autonomous Transaction Processing (ATP)
  • Autonomous Data Warehouse (ADW)
  • Object Storage
  • Streaming

I won't go into too much detail in this post about the feature, as the introduction post and GitHub documentation do a great job of explaining service brokers and the problems that they solve. Rather, I'll focus on using the OCI Service Broker to provision an ATP instance and deploy a container which has access to the ATP credentials and wallet.  

To get started, you'll first have to follow the installation instructions on GitHub. At a high level, the process involves:

  1. Deploy the Kubernetes Service Catalog client to the OKE cluster
  2. Install the svcat CLI tool
  3. Deploy the OCI Service Broker
  4. Create a Kubernetes Secret containing OCI credentials
  5. Configure Service Broker with TLS
  6. Configure RBAC (Role Based Access Control) permissions
  7. Register the OCI Service Broker

Once you've installed and registered the service broker, you're ready to use the ATP service plan to provision an ATP instance. I'll go into details below, but the overview of the process looks like so:

  1. Create a Kubernetes secret with a new admin and wallet password (in JSON format)
  2. Create a YAML configuration for the ATP Service Instance
  3. Deploy the Service Instance
  4. Create a YAML config for the ATP Service Binding
  5. Deploy the Service Binding to obtain which results in the creation of a new Kubernetes secret containing the wallet contents
  6. Create a Kubernetes secret for Microservice deployment use containing the admin password and the wallet password (in plain text format)
  7. Create a YAML config for the Microservice deployment which uses an initContainer to decode the wallet secrets (due to a bug which double encodes them) and mounts the wallet contents as a volume

Following that overview, let's take a look at a detailed example. The first thing we'll have to do is make sure that the user we're using with the OCI Service Broker has the proper permissions.  If you're using a user that is a member of the group devops then you would make sure that you have a policy in place that looks like this:

Allow group devops to manage autonomous-database in compartment [COMPARTMENT_NAME]

The next step is to create a secret that will be used to set some passwords during ATP instance creation.  Create a file called atp-secret.yaml and populate it similarly to the example below.  The values for password and walletPassword must be in the format of a JSON object as shown in the comments inline below, and must be base64 encoded.  You can use an online tool for the base64 encoding, or use the command line if you're on a Unix system (echo '{"password":"Passw0rd123456"}' | base64).
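
The original post embedded this file as a gist; a minimal sketch of its shape looks like the following (the secret name and placeholder values are mine – the password and walletPassword keys come from the broker documentation):

apiVersion: v1
kind: Secret
metadata:
  name: atp-secret
data:
  # base64 of '{"password":"<admin password>"}'
  password: <base64-encoded JSON>
  # base64 of '{"walletPassword":"<wallet password>"}'
  walletPassword: <base64-encoded JSON>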

Now create the secret via: kubectl create -f atp-secret.yaml.

Next, create a file called atp-instance.yaml and populate as follows (updating the name, compartmentId, dbName, cpuCount, storageSizeTBs, licenseType as necessary).  The parameters are detailed in the full documentation (link below).  Note, we're referring to the previously created secret in this YAML file.
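
Again, the original gist isn't reproduced here, but the instance definition looks roughly like this sketch – the instance name is mine, and the class and plan names are my assumption, so check the service broker documentation for the exact values:

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: atp-demo
spec:
  clusterServiceClassExternalName: atp-service   # assumed - verify against the broker catalog
  clusterServicePlanExternalName: standard       # assumed - verify against the broker catalog
  parameters:
    name: atpdemo
    compartmentId: ocid1.compartment.oc1..<your compartment OCID>
    dbName: atpdemo
    cpuCount: 1
    storageSizeTBs: 1
    licenseType: BYOL
  parametersFrom:
    - secretKeyRef:
        name: atp-secret
        key: password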

Create the instance with: kubectl create -f atp-instance.yaml. This will take a bit of time, but in about 15 minutes or less your instance will be up and running. You can check the status via the OCI console UI, or with the command: svcat get instances which will return a status of "ready" when the instance has been provisioned.

Now that the instance has been provisioned, we can create a binding.  Create a file called atp-binding.yaml and populate it as such:
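
As with the other files, this is a sketch rather than the original gist; the binding name matches the secret we check for below:

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: atp-demo-binding
spec:
  instanceRef:
    name: atp-demo          # must match the ServiceInstance name
  parametersFrom:
    - secretKeyRef:
        name: atp-secret
        key: walletPassword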

Note that we're once again using a value from the initial secret that we created in step 1. Apply the binding with: kubectl create -f atp-binding.yaml and check the binding status with svcat get bindings, looking again for a status of "ready". Once it's ready, you'll be able to view the secret that was created by the binding via: kubectl get secrets atp-demo-binding -o yaml where the secret name matches the 'name' value used in atp-binding.yaml. The secret will look similar to the following output:

This secret contains the contents of your ATP instance wallet and next we'll mount these as a volume inside of the application deployment.  Let's create a final YAML file called atp-demo.yaml and populate it like below.  Note, there is currently a bug in the service broker that double encodes the secrets, so it's currently necessary to use an initContainer to get the values properly decoded.
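
The deployment gist isn't shown here either; roughly, it looks like the sketch below – the initContainer base64-decodes each wallet file from the binding secret into an emptyDir volume, which the application container then mounts at /db-demo/creds (the image choices and the name of the plain-text secret from step 6 are my placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: atp-demo
spec:
  volumes:
    - name: wallet-raw
      secret:
        secretName: atp-demo-binding   # the secret created by the ServiceBinding
    - name: wallet
      emptyDir: {}
  initContainers:
    - name: decode-wallet
      image: busybox
      command: ["/bin/sh", "-c"]
      # work around the double-encoding bug by decoding each wallet file once more
      args:
        - for f in /wallet-raw/*; do base64 -d "$f" > /db-demo/creds/$(basename "$f"); done
      volumeMounts:
        - name: wallet-raw
          mountPath: /wallet-raw
        - name: wallet
          mountPath: /db-demo/creds
  containers:
    - name: atp-demo
      image: alpine
      command: ["/bin/sh", "-c", "sleep 3600"]
      env:
        - name: DB_ADMIN_USER
          value: ADMIN
        - name: DB_ADMIN_PWD
          valueFrom:
            secretKeyRef:
              name: atp-demo-secret    # plain-text secret from step 6 (placeholder name)
              key: password
        - name: WALLET_PWD
          valueFrom:
            secretKeyRef:
              name: atp-demo-secret
              key: walletPassword
      volumeMounts:
        - name: wallet
          mountPath: /db-demo/creds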

Here we're just creating a basic alpine linux instance just to test the service instance. Your application deployment would use a Docker image with your application, but the format and premise would be nearly identical to this. Create the deployment with kubectl create -f atp-demo.yaml and once the pod is in a "ready" state we can launch a terminal and test things out a bit:

Note that we have 3 environment variables available in the instance:  DB_ADMIN_USER, DB_ADMIN_PWD and WALLET_PWD.  We also have a volume available at /db-demo/creds containing all of our wallet contents that we need to make a connection to the new ATP instance.
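
If your application image ships SQL*Plus (or SQLcl), a connection test from inside the container would look roughly like this sketch – the service name comes from the tnsnames.ora inside the decoded wallet:

# point the Oracle client at the decoded wallet, then connect as the admin user
export TNS_ADMIN=/db-demo/creds
sqlplus "$DB_ADMIN_USER"/"$DB_ADMIN_PWD"@<service name from tnsnames.ora>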

Check out the full instructions for more information or background on the ATP service broker. The ability to bind to an existing ATP instance is scheduled as an enhancement to the service broker in the near future, and some other exciting features are planned.

PeopleTools Table Reference Generator

David Kurtz - Sun, 2019-06-09 15:46
Like many other PeopleSoft professionals, I spend a lot of time looking at the PeopleTools tables because they contain meta-data about the PeopleSoft application. Much of the application is stored in PeopleTools tables. Some provide information about the Data Model. Many of my utility scripts reference the PeopleTools tables, some of them update them too. Therefore, it is very helpful to be able to understand what is in these tables. In PeopleSoft for the Oracle DBA, I discussed some tables that are of regular interest. I included the tables that correspond to the database catalogue and that are used during the PeopleSoft login procedure. The tables that are maintained by the process scheduler are valuable because they contain information about who ran what process when, and how long they ran for.
I am not the only person to have started to document the PeopleTools tables on their website or blog; most people have picked a few tables that are of particular interest. I wanted to produce a complete reference.  However, with over 4000 PeopleTools tables and views, it is simply not viable to do even a significant number of them manually.  So, I wrote some SQL and PL/SQL to dynamically generate a page for each PeopleTools table and view and put the generated pages on my website.  If you use Google to search for a PeopleTools table name you will probably find the reference page.
I have now revisited that code and made a number of changes and improvements.
  • I have used a later version of PeopleTools to generate the reference on my website.  The list of PeopleTools tables is no longer defined in PeopleTools by object security, so I have used an independent and somewhat broader definition: Table or View records that are either owned by PPT or whose SQLTABLENAME is the same as the record name (see the SQL sketch after this list).
  • There is much less reliance on static pages.  There are now only 3 such pages; everything else is generated.  Instead, additional data is loaded from static scripts into the PLAN_TABLE, which should always be present in an Oracle database and so doesn't have to be created.  It should be a global temporary table, so there is no problem with debris being left behind or interference with other processes.  That data is then combined with data in the PeopleTools tables during the generation process.
  • The utility has been rewritten as a PL/SQL package that should be created in the PeopleSoft Owner ID schema (usually SYSADM).
  • The generated HTML is simply tidier, and more consistent.
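
A sketch of that record selection in SQL (my reconstruction – PSRECDEFN is the PeopleTools record definition table, and RECTYPE values 0 and 1 denote SQL tables and views):

select r.recname
from   psrecdefn r
where  r.rectype in (0, 1)                  -- SQL tables and views
and    (   r.objectownerid = 'PPT'          -- owned by PeopleTools
        or r.sqltablename  = r.recname)     -- or whose SQL table name matches the record name
order by r.recname;
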
The source is available on GitHub at https://github.com/davidkurtz/PTRef, so you can download and generate your own reference on your own current version of PeopleTools. An example of the generated output can be found on my website.
The idea of the original PeopleTools tables reference was that people could contribute additional descriptions and information that were not in the PeopleTools tables.  That can still happen, and indeed should be easier, by making changes to the scripts that load the additional data and uploading new versions to GitHub.


Dell Boomi Training: Day 8 Review/Introduction & Q/As

Online Apps DBA - Sun, 2019-06-09 09:00

[Q/A] Dell Boomi: Day 8: Cache & APIs   Dell Boomi is a business unit acquired by Dell in 2010 that specializes in cloud-based integration, API management, and Master Data Management. Dell Boomi AtomSphere is a multi-tenant cloud-based integration platform. That facilitates data and application integration.   It is a leading iPaas which helps to […]

The post Dell Boomi Training: Day 8 Review/Introduction & Q/As appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

CPU percent

Jonathan Lewis - Sun, 2019-06-09 08:31

A recent post on the ODC General Database forum asked for an explanation of the AWR report values “%Total CPU” and “%Busy CPU” under the “Instance CPU” label, and how the “%Busy CPU “ could be greater than 100%.  Here’s a text reproduction of the relevant sample supplied:

Host CPU
  CPUs  Cores  Sockets  Load Average Begin  Load Average End  %User  %System  %WIO  %Idle
     2      2        1                0.30              1.23   10.7      5.6    5.3   77.7

Instance CPU
  %Total CPU  %Busy CPU  %DB Time waiting for CPU (Resource Manager)
        29.8      133.8                                          0.0

The answer is probably “It’s 12.1 and it’s a programmer error”.

  • Note that the Host CPU %Idle is not consistent with the three usage figures:  10.7 + 5.6 + 5.3 = 21.6 whereas 100 – 77.7 = 22.3.
  • So let’s run with 22.3% and see what else we can notice: 29.8 / 22.3 = 1.3363 – that’s pretty close (when expressed as a percentage) to 133.8%
Hypothesis:

Someone did the division the wrong way round when trying to work out the percentage of the host’s non-idle CPU that could be attributed to the instance. In this example the “%Busy CPU” should actually report 100 * 22.3 / 29.8 = 74.8%

Note – the difference between 133.8 and 133.63 can be attributed to the fact that the various figures reported in this bit of the AWR are rounded to the nearest 1 decimal place.

Note 2 – I don’t think this error is present in 11.2.0.4 or 12.2.0.1

Google Jamboard ?!

Dietrich Schroff - Sun, 2019-06-09 04:24
By reorganising my Google Drive I ran into "Jamboard"...
This is a tiny Google web application where you can make some quick sketches, but the functionality is really limited.

[Q/A] 1Z0-932 Oracle Cloud Infrastructure Architect Certification Day 2: IAM (Compartments, Policies, Users, Groups)

Online Apps DBA - Sun, 2019-06-09 01:14

[Q/A] 1Z0-932 Oracle Cloud Infra Architect Day2: IAM (Compartments, Policies, Users, Groups) IAM in OCI though carries 10% weightage in certification exam 1Z0-932 but it is one of the most important topics as IAM controls who has access to what OCI resources (Network, Storage, Compute, Database etc). It is important that you understand what is […]

The post [Q/A] 1Z0-932 Oracle Cloud Infrastructure Architect Certification Day 2: IAM (Compartments, Policies, Users, Groups) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Building PivotalMySQLWeb using Cloud Native Buildpacks (CNB)

Pas Apicella - Sun, 2019-06-09 00:28
Not heard of Cloud Native Buildpacks? If not, this post will show you why you might want to take a look. If you want to make your developers more productive with Kubernetes, you’re going to want to look at Cloud Native Buildpacks.

https://buildpacks.io/

Until cloud platforms such as Heroku and Pivotal Cloud Foundry incorporate the Buildpack v3 Lifecycle, the fastest way to try Cloud Native Buildpacks is via the pack CLI, which integrates with your local Docker daemon. Here is an example below, taking the Pivotal MySQLWeb application and creating an OCI-compliant image from it.

Pre Steps:

1. Install pack using this link

  https://buildpacks.io/docs/install-pack/

2. Ensure you have Docker CE installed if not use this link

  https://hub.docker.com/search/?type=edition&offering=community

Steps:

1. I am using Pivotal MySQLWeb, which I have packaged using Maven and then exploded the JAR onto the file system to avoid compilation. You can still use source code and Cloud Native Buildpacks will still work, but in this example I avoid the Maven compilation step by using an already-compiled, exploded JAR file, which is what a build service on a cloud platform would do in any case.

Let's start by using "pack" to create our image as per below

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ pack build pivotal-mysql-web --path ./PivotalMySQLWeb

Using default builder image cloudfoundry/cnb:cflinuxfs3
Pulling image index.docker.io/cloudfoundry/cnb:cflinuxfs3
cflinuxfs3: Pulling from cloudfoundry/cnb
18d7ea8d445c: Pull complete
18d0be9dc457: Pull complete
f5407c34df38: Pull complete
35c61e03e6bf: Pull complete
40d144c93ada: Pull complete
4f4fb700ef54: Pull complete
0432ec3bb9f8: Pull complete
3731e128636c: Pull complete
1bab066bbafe: Pull complete
4cc53e89f635: Pull complete
4fd62e90f994: Pull complete
dc9fa77b2cd2: Pull complete
3cd4ed6e9bbf: Pull complete
a525f8221dc8: Pull complete
f01bc40f59c5: Pull complete
1f9842b1696d: Pull complete
3e15eeb884d5: Pull complete
3c0f59c7956f: Pull complete
c3e6214340d9: Pull complete
6955f2c8bfad: Pull complete
5112994886a0: Pull complete
e19195f86112: Pull complete
07fb5cd454f2: Pull complete
Digest: sha256:197439e9ccc699daa6431bd7154c80b3b0ce75b072792a0e93edd6779756f3bc
Status: Downloaded newer image for cloudfoundry/cnb:cflinuxfs3
Selected run image cloudfoundry/cnb-run:cflinuxfs3 from builder
Pulling image cloudfoundry/cnb-run:cflinuxfs3
cflinuxfs3: Pulling from cloudfoundry/cnb-run
0a25bf28c5eb: Pull complete
7216becd0525: Pull complete
Digest: sha256:f9605c5af04b2ba04918879f2bf9d37c55620ae28e73b94e9926cd97bbf8fe96
Status: Downloaded newer image for cloudfoundry/cnb-run:cflinuxfs3
Using build cache volume pack-cache-1f2556cf858e.build
Executing lifecycle version 0.2.1
===> DETECTING
[detector] Trying group 1 out of 4 with 8 buildpacks...
[detector] ======== Results ========
[detector] pass: Cloud Foundry OpenJDK Buildpack
[detector] skip: Cloud Foundry Build System Buildpack
[detector] pass: Cloud Foundry JVM Application Buildpack
[detector] skip: Cloud Foundry Azure Application Insights Buildpack
[detector] skip: Cloud Foundry Debug Buildpack
[detector] skip: Cloud Foundry Google Stackdriver Buildpack
[detector] skip: Cloud Foundry JMX Buildpack
[detector] skip: Cloud Foundry Procfile Buildpack
===> RESTORING
[restorer] cache '/cache': metadata not found, nothing to restore
===> ANALYZING
===> BUILDING
[builder] -----> Cloud Foundry OpenJDK Buildpack 1.0.0-M8
[builder] -----> OpenJDK JRE 11.0.3: Contributing to layer
[builder]        Downloading from https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.3%2B7/OpenJDK11U-jre_x64_linux_hotspot_11.0.3_7.tar.gz
[builder]        Verifying checksum
[builder]        Expanding to /layers/org.cloudfoundry.openjdk/openjdk-jre
[builder]        Writing JAVA_HOME to shared
[builder]
[builder] -----> Cloud Foundry JVM Application Buildpack 1.0.0-M8
[builder] -----> Main-Class Classpath: Contributing to layer
[builder]        Writing CLASSPATH to launch
[builder] -----> Process types:
[builder]        task: java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[builder]        web:  java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[builder]
===> EXPORTING
[exporter] Exporting layer 'app' with SHA sha256:b32618ed6b86fb496a4ce33db9df49fdd4ef16c5646b174b5643c8befcb7408a
[exporter] Exporting layer 'config' with SHA sha256:9538e967fa10f23b3415c382a3754ebf4c2645c20b6d76af519236c1181e7639
[exporter] Exporting layer 'launcher' with SHA sha256:04ca7957074763290a9abe6a067ce8c902a2ab51ed6c55102964e3f3294cdebd
[exporter] Exporting layer 'org.cloudfoundry.openjdk:openjdk-jre' with SHA sha256:e540f1464509ac673a25bd2f24c7dd6875f805c0dd35e9af84dd4669e2fd0c93
[exporter] Exporting layer 'org.cloudfoundry.jvmapplication:main-class' with SHA sha256:8537197b3f57d86a59397b89b4fbdd14900a602cc12961eae338b9ef2513cdc0
[exporter]
[exporter] *** Image: index.docker.io/library/pivotal-mysql-web:latest@8957afa91f464e2c0adc24968c31613148b9905ff1fb90ec59ff84e165d939ac
===> CACHING
[cacher] Caching layer 'org.cloudfoundry.openjdk:d2df8bc799b09c8375f79bf646747afac3d933bb1f65de71d6c78e7466ff8fe4' with SHA sha256:11439713b023be71211cb83ecd56a1be63e0c0be3e4814a18cc4c71d2264dea5
Successfully built image pivotal-mysql-web

2. Inspect the Docker image on your laptop as shown below

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ docker image inspect pivotal-mysql-web

[
    {
        "Id": "sha256:8957afa91f464e2c0adc24968c31613148b9905ff1fb90ec59ff84e165d939ac",
        "RepoTags": [
            "pivotal-mysql-web:latest"
        ],
        "RepoDigests": [],
        "Parent": "",
        "Comment": "",
        "Created": "2019-06-05T05:25:58Z",
        "Container": "",
        "ContainerConfig": {

...
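
If you only need a specific field rather than the full JSON, docker image inspect also accepts a Go-template --format flag; for example, to print just the image ID and creation timestamp:

docker image inspect --format '{{.Id}} created {{.Created}}' pivotal-mysql-web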

3. Run the Docker image as shown below

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ docker run --rm -p 8080:8080 pivotal-mysql-web

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.0.RELEASE)

2019-06-05 05:30:43.005  INFO 1 --- [           main] c.p.p.m.PivotalMySqlWebApplication       : Starting PivotalMySqlWebApplication on 5d21f8f32ba4 with PID 1 (/workspace/BOOT-INF/classes started by vcap in /workspace)
2019-06-05 05:30:43.009  INFO 1 --- [           main] c.p.p.m.PivotalMySqlWebApplication       : No active profile set, falling back to default profiles: default
2019-06-05 05:30:44.662  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8080 (http)
2019-06-05 05:30:44.686  INFO 1 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2019-06-05 05:30:44.687  INFO 1 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet Engine: Apache Tomcat/9.0.12
2019-06-05 05:30:44.698  INFO 1 --- [           main] o.a.catalina.core.AprLifecycleListener   : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/layers/org.cloudfoundry.openjdk/openjdk-jre/lib:/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib]
2019-06-05 05:30:44.793  INFO 1 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2019-06-05 05:30:44.794  INFO 1 --- [           main] o.s.web.context.ContextLoader            : Root WebApplicationContext: initialization completed in 1736 ms
2019-06-05 05:30:45.130  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'characterEncodingFilter' to: [/*]
2019-06-05 05:30:45.131  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'webMvcMetricsFilter' to: [/*]
2019-06-05 05:30:45.131  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2019-06-05 05:30:45.131  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'formContentFilter' to: [/*]
2019-06-05 05:30:45.132  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'requestContextFilter' to: [/*]
2019-06-05 05:30:45.132  INFO 1 --- [           main] .s.DelegatingFilterProxyRegistrationBean : Mapping filter: 'springSecurityFilterChain' to: [/*]
2019-06-05 05:30:45.133  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'httpTraceFilter' to: [/*]
2019-06-05 05:30:45.134  INFO 1 --- [           main] o.s.b.w.servlet.ServletRegistrationBean  : Servlet dispatcherServlet mapped to [/]
2019-06-05 05:30:45.436  INFO 1 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2019-06-05 05:30:45.851  INFO 1 --- [           main] .s.s.UserDetailsServiceAutoConfiguration :

Using generated security password: 3823aef6-6f72-4f5f-939d-bbd3d57ec2fa

2019-06-05 05:30:45.931  INFO 1 --- [           main] o.s.s.web.DefaultSecurityFilterChain     : Creating filter chain: Ant [pattern='/**'], []
2019-06-05 05:30:45.967  INFO 1 --- [           main] o.s.s.web.DefaultSecurityFilterChain     : Creating filter chain: any request, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@2e140e59, org.springframework.security.web.context.SecurityContextPersistenceFilter@26ae880a, org.springframework.security.web.header.HeaderWriterFilter@25a73de1, org.springframework.security.web.csrf.CsrfFilter@652ab8d9, org.springframework.security.web.authentication.logout.LogoutFilter@17814b1c, org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter@54f66455, org.springframework.security.web.authentication.ui.DefaultLoginPageGeneratingFilter@58399d82, org.springframework.security.web.authentication.ui.DefaultLogoutPageGeneratingFilter@49a71302, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4c03a37, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@3c017078, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@298d9a05, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@5cd61783, org.springframework.security.web.session.SessionManagementFilter@771db12c, org.springframework.security.web.access.ExceptionTranslationFilter@5f303ecd, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@73ab3aac]
2019-06-05 05:30:46.000  INFO 1 --- [           main] o.s.b.a.e.web.EndpointLinksResolver      : Exposing 9 endpoint(s) beneath base path '/actuator'
2019-06-05 05:30:46.096  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2019-06-05 05:30:46.101  INFO 1 --- [           main] c.p.p.m.PivotalMySqlWebApplication       : Started PivotalMySqlWebApplication in 3.654 seconds (JVM running for 4.324)

4. Browse to localhost:8080 to invoke the application just to be sure it works, or use the curl smoke test below
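
A quick smoke test from the command line should also work. The assumption here is Spring Boot's default security auto-configuration, where the user name is "user" and the password is the generated one printed in the startup log above:

# basic-auth smoke test against the running container
curl -u user:3823aef6-6f72-4f5f-939d-bbd3d57ec2fa http://localhost:8080/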



5. Publish the OCI-compliant image to your registry. In this example I am using Docker Hub, as shown below

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ pack build pasapples/pivotal-mysql-web:latest --publish --path ./PivotalMySQLWeb --no-pull

Using default builder image cloudfoundry/cnb:cflinuxfs3
Selected run image cloudfoundry/cnb-run:cflinuxfs3 from builder
Using build cache volume pack-cache-a4a78257c7be.build
Executing lifecycle version 0.2.1
===> DETECTING
[detector] Trying group 1 out of 4 with 8 buildpacks...
[detector] ======== Results ========
[detector] pass: Cloud Foundry OpenJDK Buildpack
[detector] skip: Cloud Foundry Build System Buildpack
[detector] pass: Cloud Foundry JVM Application Buildpack
[detector] skip: Cloud Foundry Azure Application Insights Buildpack
[detector] skip: Cloud Foundry Debug Buildpack
[detector] skip: Cloud Foundry Google Stackdriver Buildpack
[detector] skip: Cloud Foundry JMX Buildpack
[detector] skip: Cloud Foundry Procfile Buildpack
===> RESTORING
[restorer] cache '/cache': metadata not found, nothing to restore
===> ANALYZING
===> BUILDING
[builder] -----> Cloud Foundry OpenJDK Buildpack 1.0.0-M8
[builder] -----> OpenJDK JRE 11.0.3: Contributing to layer
[builder]        Downloading from https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.3%2B7/OpenJDK11U-jre_x64_linux_hotspot_11.0.3_7.tar.gz
[builder]        Verifying checksum
[builder]        Expanding to /layers/org.cloudfoundry.openjdk/openjdk-jre
[builder]        Writing JAVA_HOME to shared
[builder]
[builder] -----> Cloud Foundry JVM Application Buildpack 1.0.0-M8
[builder] -----> Main-Class Classpath: Contributing to layer
[builder]        Writing CLASSPATH to launch
[builder] -----> Process types:
[builder]        task: java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[builder]        web:  java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[builder]
===> EXPORTING
[exporter] Exporting layer 'app' with SHA sha256:b32618ed6b86fb496a4ce33db9df49fdd4ef16c5646b174b5643c8befcb7408a
[exporter] Exporting layer 'config' with SHA sha256:9538e967fa10f23b3415c382a3754ebf4c2645c20b6d76af519236c1181e7639
[exporter] Exporting layer 'launcher' with SHA sha256:04ca7957074763290a9abe6a067ce8c902a2ab51ed6c55102964e3f3294cdebd
[exporter] Exporting layer 'org.cloudfoundry.openjdk:openjdk-jre' with SHA sha256:e540f1464509ac673a25bd2f24c7dd6875f805c0dd35e9af84dd4669e2fd0c93
[exporter] Exporting layer 'org.cloudfoundry.jvmapplication:main-class' with SHA sha256:8537197b3f57d86a59397b89b4fbdd14900a602cc12961eae338b9ef2513cdc0
[exporter]
[exporter] *** Image: index.docker.io/pasapples/pivotal-mysql-web:latest@sha256:c862eda516289c2daa29580c95b74b4d72eca9caf941a3a6ac2bf2bd886057e5
===> CACHING
[cacher] Caching layer 'org.cloudfoundry.openjdk:d2df8bc799b09c8375f79bf646747afac3d933bb1f65de71d6c78e7466ff8fe4' with SHA sha256:11439713b023be71211cb83ecd56a1be63e0c0be3e4814a18cc4c71d2264dea5
Successfully built image pasapples/pivotal-mysql-web:latest
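
As an optional sanity check that the publish step worked, the image can be pulled straight back from Docker Hub:

docker pull pasapples/pivotal-mysql-web:latest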


At this point you have an OCI-compliant image sitting in your registry, ready to be consumed for your K8s application needs, built from just source code or, as in this example, pre-compiled source code for a Java application. Let's not forget this supports a polyglot programming model, so it works for NodeJS, Python and anything else buildpacks support. A quick sketch of consuming the image on Kubernetes follows below.
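
As a minimal sketch of that last step, the published image could be run on a Kubernetes cluster like this. The deployment name is illustrative, and the commands assume kubectl is configured against a cluster that can pull from Docker Hub:

# create a deployment from the published image and expose it on port 8080
kubectl create deployment pivotal-mysql-web --image=pasapples/pivotal-mysql-web:latest
kubectl expose deployment pivotal-mysql-web --port=8080 --target-port=8080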

More Information

1. Cloud Native Buildpacks: an Industry-Standard Build Process for Kubernetes and Beyond.

2. buildpacks.io Home Page

Categories: Fusion Middleware

Dell Boomi Training: How to check number of Connection Licenses in Boomi Account

Online Apps DBA - Sat, 2019-06-08 03:10

Connection Licenses in Boomi Account. Our [New Blog] Dell Boomi: How to check the number of Connection Licenses in Boomi Account at https://k21academy.com/dellboomi21 is ready to explain: ✔ How to check the number of Connection Licenses in a Boomi Account, ✔ Step-wise guidance to perform the same, ✔ Types of Environments (PROD & TEST) & much […]

The post Dell Boomi Training: How to check number of Connection Licenses in Boomi Account appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs
