Feed aggregator

Bealls, Inc. Drives Operational Agility with Oracle Retail Cloud

Oracle Press Releases - Tue, 2019-07-23 09:00
Press Release
Bealls, Inc. Drives Operational Agility with Oracle Retail Cloud
American retailer continues digital transformation with new retail technologies

Bradenton, Florida and Redwood Shores, Calif.—Jul 23, 2019

Bealls, Inc., the company behind more than 500 stores, has turned to the Oracle Retail Cloud to digitally transform its operations and drive agility. With new inventory arriving weekly, Bealls recognized the need to modernize processes and gain new efficiencies in managing its extensive inventory spanning casual lifestyle, apparel and home merchandise. After a successful implementation of Oracle Human Capital Management (HCM) Cloud and Oracle Enterprise Resource Planning (ERP) Cloud, Bealls continues to modernize its technology infrastructure with Oracle Retail Merchandise Operations, Oracle Retail Home, and Oracle Retail Insights Cloud Service.

“Retail is a dynamic industry that must keep pace with the speed of fashion and continuous disruption from new ecommerce players. Cloud technology has become a strategic investment for us to compete and grow by continuously adopting new innovations and best practices from Oracle that allow us to make more informed business decisions faster than ever before,” said Dave Massey, chief digital and technology officer, Bealls. “With new insights from cloud solutions, we’re able to better understand our customers and make sure we make the right inventory investments that keep them wanting to return.”

Bealls, Inc. leverages traditional and off-price stores to tailor the customer experience. With the Oracle Retail implementation, Bealls will have more information and insight to drive its channel strategy. Logic Information Systems, an Oracle PartnerNetwork Gold level member, will implement a phased cloud deployment of the new solutions.

“To compete in the modern retail world, retailers like Bealls need to be focused on financial performance while delivering a tailored brand experience that converts consumers into advocates,” said Mike Webster, senior vice president and general manager, Oracle Retail. “By shifting to the cloud, Bealls can leverage a continuous cadence of innovation from Oracle Retail that allows them to respond to ever evolving market demands.”

Contact Info
Kaitlin Ambrogio
Oracle
+1.781.434.9555
kaitlin.ambrogio@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility and refine the customer experience. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

About Bealls Inc.

Headquartered in Bradenton, Florida since its founding in 1915, the family-owned corporation now operates more than 550 stores in 15 states under the names of Bealls, Bealls Outlet, Burkes Outlet, Home Centric and Bunulu.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kaitlin Ambrogio

  • +1.781.434.9555

Oracle Health Sciences Clinical Trial Product Named a Leader in Everest Group PEAK Matrix™

Oracle Press Releases - Tue, 2019-07-23 07:00
Press Release
Oracle Health Sciences Clinical Trial Product Named a Leader in Everest Group PEAK Matrix™
Noted as the highest scoring vendor in Vision and Capability for Life Sciences Clinical Trials Products

Redwood Shores, Calif.—Jul 23, 2019

Oracle Health Sciences was positioned as a Leader and the highest scoring vendor in Vision and Capability in the Everest Group PEAK Matrix™ 2019 Assessment of Life Sciences Clinical Trials Products. Everest Group assesses vendors based on their market impact (market adoption, value delivered and portfolio mix) and vision and capability (vision and strategy, functionality, flexibility and ease of deployment, engagement and commercial model and support).

“We are honored to be recognized again as a leader and to be positioned the highest in what we believe is a critical metric–vision and capability. With the launch of Oracle Health Sciences Clinical One, the acquisition of goBalto for study startup, and our prowess in safety, we are able to offer the innovation, stability, reliability and longevity pharma customers are looking for in a technology partner,” said Steve Rosenberg, vice president and general manager for Oracle Health Sciences. 

Everest Group highlighted that, as a leader in the industry, Oracle Health Sciences has been adding new innovations to its existing eClinical solutions, enabling the industry to advance clinical trials and make significant progress in running faster, higher-quality and lower-cost trials.

“The clinical trials landscape has a significant opportunity for digitalization that focuses on enhancing trial efficiency, optimizing costs, and reducing cycle time to get drugs to market faster. Currently, most life sciences firms use disparate point solutions for different functions across their clinical trials. Oracle Health Sciences with its Clinical One platform is focusing on enabling a platform-led future to help life sciences firms accelerate digital transformation. With its emphasis on developing integrated solutions to assist trial design, site management, and patient safety, Oracle Health Sciences emerges as a credible partner for life sciences firms,” said Nitish Mittal, Practice Director, Healthcare and Life Sciences Technology at Everest Group.

Everest Group identified several Oracle Health Sciences strengths that placed them as the leader in the vision and capability corner, including:

  • Acquisitions and alliances across all phases of clinical trials to build a holistic set of offerings
  • Integrated end-to-end vision and market messaging

“For decades we have worked closely with emerging firms to the household names in pharmaceuticals to tackle the most complex clinical trials,” added Rosenberg. “This is a testament to our continued dedication to the needs of our customers and commitment to helping them bring therapies to patients faster without sacrificing safety and quality.”

    • Download the Everest Group PEAK Matrix™ Summary Report
    • Learn more about Oracle Health Sciences
Contact Info
Judi Palmer
Oracle
+1 650 784 7901
judi.palmer@oracle.com
About Oracle Health Sciences

Oracle Health Sciences breaks down barriers and opens new pathways to unify people and processes to bring new drugs to market faster. As a leader in Life Sciences technology, Oracle Health Sciences is trusted by 29 of the top 30 pharma, 10 of the top 10 biotech and 10 of the top 10 CROs for managing clinical trials and pharmacovigilance around the globe.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1 650 784 7901

ORA-01000 and agent13c

Yann Neuhaus - Tue, 2019-07-23 03:51

Recently I received error messages from OEM 13c saying too many cursors were opened in a database:

instance_throughput:ORA-01000: maximum open cursors exceeded

My database currently had this open_cursors value:

SQL> show parameter open_cursors

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
open_cursors                         integer     300

I decided to increase its value to 800:

SQL> alter system set open_cursors=800;

System altered.

But a few minutes later I received the same message again, so I decided to take a closer look to discover what was happening.

SQL> SELECT  max(a.value) as highest_open_cur, p.value as max_open_cur FROM v$sesstat a, v$statname b, v$parameter p WHERE  a.statistic# = b.statistic#  and b.name = 'opened cursors current' and p.name= 'open_cursors' group by p.value;

HIGHEST_OPEN_CUR     MAX_OPEN_CUR
   300                 800

So I needed to find out which session was causing the error:

SQL> select a.value, s.username, s.sid, s.serial# from v$sesstat a, v$statname b, v$session s where a.statistic# = b.statistic#  and s.sid=a.sid and b.name = 'opened cursors current' and s.username is not null;

     VALUE USERNAME                              SID    SERIAL#
---------- ------------------------------ ---------- ----------
         9 SYS                                     6      36943
         1 SYS                                   108      31137
         1 SYS                                   312      15397
       300 SYS                                   417      31049
        11 SYS                                   519      18527
         7 SYS                                   619      48609
         1 SYS                                   721      51139
         0 PUBLIC                                922         37
        17 SYS                                  1024          1
        14 SYS                                  1027      25319
         1 SYS                                  1129      40925

A SYS connection is using 300 cursors :=(( Let's see what it is.
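
One way to check is to look at the session's program and module in v$session (SID 417 comes from the listing above):

SQL> select sid, serial#, username, program, module
     from v$session
     where sid = 417;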

Great, the agent 13c is causing the problem :=((

I had already encountered this kind of problem at another client's site. In fact, the agent 13c uses metrics to determine the High Availability Disk or Media Backup status every 15 minutes, and this consumes a lot of cursors. The best way to avoid the ORA-01000 errors is to disable those metrics:
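
Those metrics are typically disabled from the target database's "Metric and Collection Settings" page in the OEM console; once that is done, the agent can be reloaded from the database host, for example:

# on the database host, as the agent owner
$ emctl reload agent
$ emctl status agent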

After reloading the agent and reevaluating the alert, the incident disappeared successfully :=)

Cet article ORA-01000 and agent13c est apparu en premier sur Blog dbi services.

Resize ACFS Volume

Michael Dinh - Mon, 2019-07-22 13:14

Current Filesystem for ACFS is 299G.

Filesystem             Size  Used Avail Use% Mounted on
/dev/asm/acfs_vol-177  299G  2.6G  248G   2% /ggdata02

Free_MB is 872, which triggers paging (alerts) due to insufficient FREE space in ASM Disk Group ACFS_DATA.

$ asmcmd lsdg -g ACFS_DATA
Inst_ID  State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      1  MOUNTED  EXTERN  N         512   4096  4194304    307184      872                0             872              0             N  ACFS_DATA/
      2  MOUNTED  EXTERN  N         512   4096  4194304    307184      874                0             872              0             N  ACFS_DATA/

Review attributes for ASM Disk Group ACFS_DATA.

$ asmcmd lsattr -l -G ACFS_DATA
Name                     Value       
access_control.enabled   FALSE       
access_control.umask     066         
au_size                  4194304     
cell.smart_scan_capable  FALSE       
compatible.advm          12.1.0.0.0  
compatible.asm           12.1.0.0.0  
compatible.rdbms         12.1.0.0.0  
content.check            FALSE       
content.type             data        
disk_repair_time         3.6h        
failgroup_repair_time    24.0h       
idp.boundary             auto        
idp.type                 dynamic     
phys_meta_replicated     true        
sector_size              512         
thin_provisioned         FALSE       

Resize /ggdata02 to 250G.

$ acfsutil size 250G /ggdata02
acfsutil size: new file system size: 268435456000 (256000MB)

Review results.

$ asmcmd lsdg -g ACFS_DATA
Inst_ID  State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      1  MOUNTED  EXTERN  N         512   4096  4194304    307184    51044                0           51044              0             N  ACFS_DATA/
      2  MOUNTED  EXTERN  N         512   4096  4194304    307184    51044                0           51044              0             N  ACFS_DATA/


$ df -h /ggdata02
Filesystem             Size  Used Avail Use% Mounted on
/dev/asm/acfs_vol-177  250G  2.6G  248G   2% /ggdata02

$ asmcmd volinfo --all
Diskgroup Name: ACFS_DATA

	 Volume Name: ACFS_VOL
	 Volume Device: /dev/asm/acfs_vol-177
	 State: ENABLED
	 Size (MB): 256000
	 Resize Unit (MB): 512
	 Redundancy: UNPROT
	 Stripe Columns: 8
	 Stripe Width (K): 1024
	 Usage: ACFS
	 Mountpath: /ggdata02

Conditionally Required Floating Item

Jeff Kemp - Mon, 2019-07-22 03:43

An item in the Universal Theme using the Optional – Floating template looks like this:

An item using the Required – Floating template looks like this:

In addition, if the item is required we would most probably set the Value Required attribute to Yes. What if the item is sometimes required but not always? How do we create a Conditionally Required field?

Firstly, we would make sure there is a Validation on the field that checks that the value is provided if required. This way, regardless of what the form may or may not send to the database, it is validated appropriately.

Secondly, to indicate to the user that the item is required or optional, based on the value of another item, we can use a Dynamic Action that sets the required item property (this triggers the client-side validation) and adds or removes the is-required class from the item’s container (this shows the little red “required” indicator on the page).

For example, let’s say that whether item P1_COST_CENTRE is required or not is dependent on whether a hidden item, P1_COST_CENTRE_REQUIRED, has the value 'Y'.

  • Create a Dynamic Action
    1. Event: Change
    2. Selection Type: Item(s)
    3. Item(s): P1_COST_CENTRE_REQUIRED
    4. Client-side Condition Type: Item = Value
    5. Item: P1_COST_CENTRE_REQUIRED
    6. Value: Y
  • Create a True Action: Execute JavaScript Code
var item = $("#P1_COST_CENTRE");
item.prop("required",true);
item.closest(".t-Form-fieldContainer").addClass("is-required");
  • Create a False Action: Execute JavaScript Code
var item = $("#P1_COST_CENTRE");
item.prop("required",false);
item.closest(".t-Form-fieldContainer").removeClass("is-required");

The above code works for all item templates (“Optional”, “Optional – Above”, “Optional – Floating”, etc.) in the Universal Theme; I’ve tested this on APEX 18.2 and 19.1.

Note: this is custom code for the Universal Theme, so it may or may not work for other themes; and might stop working in a future revision of the theme.

Plugins

UPDATE 29/7/2019: I’ve created some simple Dynamic Action plugins (for APEX 18.2 and later) to implement this, if you’re interested you can download them from here:

To use these plugins, select them as an Action to take on a Dynamic Action:

EDIT 29/7/2019: modified to use a better method to find the container div.

Older Themes

In other themes, the way that a required field is rendered is different. For example, in Theme 26 (Productivity Applications) the label for a required item is rendered in bold, along with a red asterisk; if the item is optional, no red asterisk is rendered. The way to make an item conditionally mandatory in this theme is:

  1. Set the item to use the Required template (so that the red asterisk is rendered).
  2. In the Dynamic Action JavaScript, execute the following if the item should be optional:
var itemLabel = $("label[for='P1_COST_CENTRE']");
itemLabel.removeClass("uRequired");
itemLabel.addClass("uOptional");

To make the item required again:

var itemLabel = $("label[for='P1_COST_CENTRE']");
itemLabel.removeClass("uOptional");
itemLabel.addClass("uRequired");

KSQL in Football: FIFA Women’s World Cup Data Analysis

Rittman Mead Consulting - Mon, 2019-07-22 02:53

One of the football (as per European terminology) highlights of the summer is the FIFA Women’s World Cup. France, Brazil, and the USA are the favourites, and this year Italy is present at the event for the first time in 20 years.

From a data perspective, the World Cup represents an interesting source of information. There's a lot of dedicated press coverage, as well as the standard social media excitement following any kind of big event.

The idea in this blog post is to mix information coming from two distinct channels: the RSS feeds of sport-related newspapers and Twitter feeds of the FIFA Women’s World Cup. The goal will be to understand how the sentiment of official news related to the two teams involved in the final compares to that of the tweets.

In order to achieve our targets, we'll use pre-built connectors available in Confluent Hub to source data from RSS and Twitter feeds, KSQL to apply the necessary transformations and analytics, Google’s Natural Language API for sentiment scoring, Google BigQuery for data storage, and Google Data Studio for visual analytics.

Data sources

The beginning of our journey starts with connecting to various data sources. Twitter represents the default source for most event streaming examples, and it's particularly useful in our case because it contains high-volume event streaming data with easily identifiable keywords that can be used to filter for relevant topics.

Ingesting Twitter data

Ingesting Twitter data is very easy with Kafka Connect, a framework for connecting Kafka with external systems. Within the pre-built connectors we can find the Kafka Connect Twitter, all we need to do is install it using the Confluent Hub client.

confluent-hub install jcustenborder/kafka-connect-twitter:latest

To start ingesting the Twitter data, we need to create a configuration file containing the following important bits:

  • filter.keywords: We need to list all the keywords we are interested in, separated by a comma. Since we want to check tweets from the FIFA Women’s World Cup, we’ll use FIFAWWC, representing both the World Cup Twitter handle and the most common related hashtag.
  • kafka.status.topic: The topic that will be used to store the tweets we selected; we'll call it twitter_avro, since the connector output format is Avro.
  • twitter.oauth: This represents the Twitter credentials. More information can be found on Twitter's developer website.

After the changes, our configuration file looks like the following:

filter.keywords=FIFAWWC
kafka.status.topic=twitter_avro
twitter.oauth.accessToken=<TWITTER ACCESS TOKEN>
twitter.oauth.accessTokenSecret=<TWITTER ACCESS TOKEN SECRET>
twitter.oauth.consumerKey=<TWITTER ACCESS CUSTOMER KEY>
twitter.oauth.consumerSecret=<TWITTER CUSTOMER SECRET>

It's time to start it up! We can use the Confluent CLI load command:

confluent load twitter -d $TWITTER_HOME/twitter.properties

$TWITTER_HOME is the folder containing the configuration file. We can check the Kafka Connect status by querying the REST APIs with the following:

curl -s "http://localhost:8083/connectors/twitter/status" | jq [.connector.state] 
[
  "RUNNING"
]

We can also check if all the settings are correct by consuming the AVRO messages in the twitter_avro topic with a console consumer:

confluent consume twitter_avro --value-format avro

And the result is, as expected, an event stream of tweets.

RSS feeds as another data source

The second data source that we'll use for our FIFA Women's World Cup sentiment analytics is RSS feeds from sports-related newspapers. RSS feeds are useful because they share official information about teams and players, like results, match events, and injuries. RSS feeds should be considered neutral since they should only report facts. For this blog post, we'll use RSS feeds as a way to measure the average sentiment of the news. As in the Twitter case above, a prebuilt Kafka Connect RSS Source exists, so all we need to do is install it via the Confluent Hub client:

confluent-hub install kaliy/kafka-connect-rss:latest

Then, create a configuration file with the following important parameters:

  • rss.urls: This is a list of space-separated RSS feed URLs. For our Women’s World Cup example, we’ve chosen the following sources: La Gazzetta dello Sport, Transfermarkt, Eurosport, UEFA, The Guardian, Daily Mail, The Sun Daily, BBC
  • topic: The Kafka topic to write to, which is rss_avro in our case

The full configuration file looks like the following:

name=RssSourceConnector
tasks.max=1
connector.class=org.kaliy.kafka.connect.rss.RssSourceConnector
rss.urls=https://www.transfermarkt.co.uk/rss/news https://www.eurosport.fr/rss.xml https://www.uefa.com/rssfeed/news/rss.xml https://www.theguardian.com/football/rss https://www.dailymail.co.uk/sport/index.rss https://www.thesundaily.my/rss/sport http://feeds.bbci.co.uk/news/rss.xml https://www.gazzetta.it/rss/home.xml
topic=rss_avro

And again, we can start the ingestion of RSS feeds with the Confluent CLI:

confluent load RssSourceConnector -d $RSS_HOME/RssSourceConnector.properties

We can check the status of the Kafka connectors with a simple script that wraps the Kafka Connect REST API (a minimal sketch is shown after the output below), calling it like:

./connect_status.sh
RssSourceConnector  |  RUNNING  |  RUNNING
twitter             |  RUNNING  |  RUNNING
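
A minimal version of such a status script might look like this (a sketch, assuming Kafka Connect listens on localhost:8083 and jq is installed):

#!/bin/bash
# connect_status.sh - print every connector with its connector state and task states
for connector in $(curl -s http://localhost:8083/connectors | jq -r '.[]'); do
  status=$(curl -s "http://localhost:8083/connectors/${connector}/status")
  connector_state=$(echo "$status" | jq -r '.connector.state')
  task_states=$(echo "$status" | jq -r '[.tasks[].state] | join(",")')
  printf '%-20s|  %s  |  %s\n' "$connector" "$connector_state" "$task_states"
done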

We can see that both the RssSourceConnector and the twitter connector are up and running. We can then check the actual data with the console consumer.

confluent consume rss_avro --value-format avro

Below is the output as expected.

Shaping the event streams

After ingesting the Twitter and RSS event streams into topics, it’s time to shape them with KSQL. Shaping the topics accomplishes two purposes:

  1. It makes the topics queryable from KSQL
  2. It defines additional structures that can be reused in downstream applications

The Twitter stream lands in Avro format with the fields listed in the related GitHub repo. We can easily declare a TWITTER_STREAM KSQL stream on top of TWITTER_AVRO with:

CREATE STREAM TWITTER_STREAM WITH (
KAFKA_TOPIC='TWITTER_AVRO',
VALUE_FORMAT='AVRO',
TIMESTAMP='CREATEDAT'
);

There is no need to define the individual fields in the event stream declaration because the topic is already in Avro, so the fields will be sourced from the Confluent Schema Registry. Schema Registry is the component in the Kafka ecosystem in charge of storing, versioning and serving the topics' Avro schemas. When a topic is in Avro format, its schema is stored in the Schema Registry, where downstream applications (like KSQL in this case) can retrieve it and use it to “shape” the messages in the topic.
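
As a side note, the registered schema itself can be inspected through the Schema Registry REST API; assuming the registry runs on its default port 8081 and the default subject naming strategy is used, something like this shows the value schema registered for the topic:

curl -s "http://localhost:8081/subjects/twitter_avro-value/versions/latest" | jq .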

The important bits of the above KSQL for our definition are:

  • KAFKA_TOPIC='TWITTER_AVRO': the definition of the source topic
  • VALUE_FORMAT='AVRO': the definition of the source topic format
  • TIMESTAMP='CREATEDAT': the Tweet's creation date, which is used as the event timestamp

We can now check that the fields’ definition has correctly been retrieved by the Schema Registry with:

DESCRIBE TWITTER_STREAM;

Or, we can use the REST API by:

curl -X "POST" "http://localhost:8088/ksql" \
-H "Content-Type: application/vnd.ksql.v1+json; charset=utf-8" \
-d $'{
"ksql": "DESCRIBE TWITTER_STREAM;",
"streamsProperties": {}
}'

The resulting fields section will be:

"fields": [
{
"name": "ROWTIME",
"schema": {
"type": "BIGINT",
"fields": null,
"memberSchema": null
},
{
"name": "ROWKEY",
"schema": {
"type": "STRING",
"fields": null,
"memberSchema": null
},
{
"name": "CREATEDAT",
"schema": {
"type": "BIGINT",
"fields": null,
"memberSchema": null
},
{
"name": "ID",
"schema": {
"type": "BIGINT",
"fields": null,
"memberSchema": null
},
...
}

The same applies to the RSS feed contained in rss_avro with:

create stream RSS_STREAM
WITH(
KAFKA_topic='rss_avro',
TIMESTAMP='DATE',
TIMESTAMP_FORMAT='yyyy-MM-dd''T''HH:mm:ss''Z''',
VALUE_FORMAT='AVRO'
);

The result will be:

ksql> describe RSS_STREAM;
Name                 : RSS_STREAM
Field   | Type
--------------------------------------------------------------
ROWTIME | BIGINT           (system)
ROWKEY  | VARCHAR(STRING)  (system)
FEED    | STRUCT<TITLE VARCHAR(STRING), URL VARCHAR(STRING)>
TITLE   | VARCHAR(STRING)
ID      | VARCHAR(STRING)
LINK    | VARCHAR(STRING)
CONTENT | VARCHAR(STRING)
AUTHOR  | VARCHAR(STRING)
DATE    | VARCHAR(STRING)
--------------------------------------------------------------

We can also use the URL manipulation functions added in KSQL 5.2 to extract useful information from the LINK column with:

CREATE STREAM RSS_STREAM_URL_DECODING AS
SELECT LINK,
URL_EXTRACT_HOST(LINK) HOST,
URL_EXTRACT_PATH(LINK) PATH,
URL_EXTRACT_PROTOCOL(LINK) PROTOCOL,
URL_EXTRACT_QUERY(LINK) QUERY_TEXT
FROM RSS_STREAM;

The result will be:

ksql> SELECT HOST, PATH, PROTOCOL, QUERY_TEXT FROM RSS_STREAM_URL_DECODING LIMIT 5;
www.dailymail.co.uk | /sport/football/article-6919585/Paul-Scholes-backs-Manchester-United-spring-surprise-Barcelona.html | https | ns_mchannel=rss&ns_campaign=1490&ito=1490
www.dailymail.co.uk | /sport/formulaone/article-6916337/Chinese-Grand-Prix-F1-race-LIVE-Shanghai-International-Circuit.html | https | ns_mchannel=rss&ns_campaign=1490&ito=1490
www.dailymail.co.uk | /sport/football/article-6919403/West-Brom-make-approach-Preston-manager-Alex-Neil.html | https | ns_mchannel=rss&ns_campaign=1490&ito=1490
www.dailymail.co.uk | /sport/football/article-6919373/Danny-Murphy-Jermaine-Jenas-fascinating-mind-games-thrilling-title-race.html | https | ns_mchannel=rss&ns_campaign=1490&ito=1490
www.dailymail.co.uk | /sport/football/article-6919215/Brazilian-legend-Pele-successfully-undergoes-surgery-remove-kidney-stone-Sao-Paulo-hospital.html | https | ns_mchannel=rss&ns_campaign=1490&ito=1490
Limit Reached
Query terminated

Sentiment analytics and Google’s Natural Language APIs

Text processing is a part of machine learning and is continuously evolving with a huge variety of techniques and related implementations. Sentiment analysis represents a branch of text analytics and aims to identify and quantify affective states contained in a text corpus.

Natural Language APIs provide sentiment scoring as a service using two dimensions:

  1. Score: Positive (Score > 0) or Negative (Score < 0) Emotion
  2. Magnitude: Emotional Content Amount

For more information about sentiment score and magnitude interpretation, refer to the documentation.

Using Natural Language APIs presents various benefits:

  • Model training: Natural Language is a pre-trained model, ideal in situations where we don't have a set of already-scored corpuses.
  • Multi-language: RSS feeds and tweets can be written in multiple languages. Google Natural Language is capable of scoring several languages natively.
  • API call: Natural Language can be called via an API, making the integration easy with other tools.

Sentiment scoring in KSQL with user-defined functions (UDFs)

The Natural Language APIs are available via client libraries in various languages, including Python, C#, and Go. For the purposes of this blog post, we'll be looking at the Java implementation since it is currently the language used to implement KSQL user-defined functions (UDFs). For more details on how to build a UD(A)F function, please refer to How to Build a UDF and/or UDAF in KSQL 5.0 by Kai Waehner, which we'll use as a base for the GSentiment class definition.

The basic steps to implementing Natural Language API calls in a UDF are the following:

  1. Add the google.cloud.language JAR dependency to your project. If you are using Maven, you just need to add the following to your pom.xml:

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-language</artifactId>
  <version>1.25.0</version>
</dependency>

  2. Create a new Java class called GSentiment.
  3. Import the required classes:

//KSQL UDF Classes
import io.confluent.ksql.function.udf.Udf;
import io.confluent.ksql.function.udf.UdfDescription;
//Google NL Classes
import com.google.cloud.language.v1.LanguageServiceClient;
import com.google.cloud.language.v1.Sentiment;
import com.google.cloud.language.v1.AnalyzeSentimentResponse;
import com.google.cloud.language.v1.Document;
import com.google.cloud.language.v1.Document.Type;

  4. Define the GSentiment class and add the Java annotation: @UdfDescription(name = "gsentiment", description = "Sentiment scoring using Google NL API") public class Gsentiment { ... }.
  5. Within the class, declare gsentiment as the method accepting a String text as input. As of now, UDFs can't return two output values, so we return the sentiment score and magnitude as an array of double: @Udf(description = "return sentiment scoring") public List<Double> gsentiment(String text) { ... }.
  6. Within the gsentiment method, invoke the Natural Language API sentiment scoring and cast the result as an array. Since a UDF can currently return only one value, we need to pipe the sentiment score and magnitude into an array of two elements.
Double[] arr = new Double[2];   
try (LanguageServiceClient languageServiceClient = LanguageServiceClient.create()) {
Document document = Document.newBuilder()
                .setContent(text)
                .setType(Type.PLAIN_TEXT)
                .build();
AnalyzeSentimentResponse response = languageServiceClient.analyzeSentiment(document);
Sentiment sentiment = response.getDocumentSentiment();

arr[0]=(double)sentiment.getMagnitude();
arr[1]=(double)sentiment.getScore();
} 
catch (Exception e) {
arr[0]=(double) 0.0;
arr[1]=(double) 0.0;
}
return Arrays.asList(arr);

  7. As mentioned in How to Build a UDF and/or UDAF in KSQL 5.0, build an uber JAR that includes the KSQL UDF and any dependencies, and copy it to the KSQL extension directory (defined in the ksql.extension.dir parameter in ksql-server.properties).
  8. Add an environment variable GOOGLE_APPLICATION_CREDENTIALS pointing to the service account key that will be used to authenticate to Google services.
  9. Restart KSQL.
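
Putting the fragments above together, a minimal version of the complete UDF class might look like the following (a sketch assembled from the snippets in this post; the package name and the comments are mine):

package com.example.ksql.udf; // hypothetical package name

import java.util.Arrays;
import java.util.List;

// KSQL UDF classes
import io.confluent.ksql.function.udf.Udf;
import io.confluent.ksql.function.udf.UdfDescription;

// Google Natural Language classes
import com.google.cloud.language.v1.AnalyzeSentimentResponse;
import com.google.cloud.language.v1.Document;
import com.google.cloud.language.v1.Document.Type;
import com.google.cloud.language.v1.LanguageServiceClient;
import com.google.cloud.language.v1.Sentiment;

@UdfDescription(name = "gsentiment", description = "Sentiment scoring using Google NL API")
public class Gsentiment {

    @Udf(description = "return sentiment scoring")
    public List<Double> gsentiment(String text) {
        Double[] arr = new Double[2];
        try (LanguageServiceClient languageServiceClient = LanguageServiceClient.create()) {
            Document document = Document.newBuilder()
                    .setContent(text)
                    .setType(Type.PLAIN_TEXT)
                    .build();
            AnalyzeSentimentResponse response = languageServiceClient.analyzeSentiment(document);
            Sentiment sentiment = response.getDocumentSentiment();
            // pack magnitude and score into a single array, since a UDF can return only one value
            arr[0] = (double) sentiment.getMagnitude();
            arr[1] = (double) sentiment.getScore();
        } catch (Exception e) {
            // on any API error, fall back to a neutral score
            arr[0] = 0.0;
            arr[1] = 0.0;
        }
        return Arrays.asList(arr);
    }
}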

At this point, we should be able to call the GSentiment UDF from KSQL:

ksql> SELECT GSENTIMENT(text) FROM TWITTER_STREAM LIMIT 5;
[0.10000000149011612, -0.10000000149011612]
[0.20000000298023224, 0.20000000298023224]
[0.5                , 0.10000000149011612]
[0.10000000149011612, 0.10000000149011612]
[0.0                , 0.0]
Limit Reached
Query terminated

As expected, the UDF returns an ARRAY of numbers. In order to get the sentiment score and magnitude in separated columns, we simply need to extract the relevant values:

ksql> SELECT GSENTIMENT(TEXT)[0] SCORE, GSENTIMENT(TEXT)[1] MAGNITUDE FROM TWITTER_STREAM LIMIT 5;
0.20000000298023224 | 0.10000000149011612
0.30000001192092896 | 0.10000000149011612
0.800000011920929   | 0.800000011920929
0.0                 | 0.0
0.30000001192092896 | 0.30000001192092896
Limit Reached
Query terminated

However, we should note that the Natural Language APIs are priced per API call and, in the above SQL, we are calling the API twice, once for each GSENTIMENT call. Therefore, the above SQL will cost us two API calls per document. To optimise the cost, we can create a new event stream, TWITTER_STREAM_WITH_SENTIMENT, which will physicalize the array in Kafka.

CREATE STREAM TWITTER_STREAM_WITH_SENTIMENT AS
SELECT
*,
GSENTIMENT(TEXT) AS SENTIMENT
FROM TWITTER_STREAM;

Next, parse the sentiment SCORE and MAGNITUDE from the TWITTER_STREAM_WITH_SENTIMENT event stream:

CREATE STREAM TWITTER_STREAM_WITH_SENTIMENT_DETAILS as
SELECT *,
SENTIMENT[0] SCORE,
SENTIMENT[1] MAGNITUDE
FROM TWITTER_STREAM_WITH_SENTIMENT;

With this second method, we optimize the cost with a single Natural Language API call per tweet. We can do the same with the RSS feeds by declaring:

CREATE STREAM RSS_STREAM_WITH_SENTIMENT AS
SELECT
*,
GSENTIMENT(CONTENT) SENTIMENT
FROM RSS_STREAM_FLATTENED;
CREATE STREAM RSS_STREAM_WITH_SENTIMENT_DETAILS as
SELECT *,
SENTIMENT[0] SCORE,
SENTIMENT[1] MAGNITUDE
FROM RSS_STREAM_WITH_SENTIMENT;

Sink to Google BigQuery

The following part of this blog post focuses on pushing the dataset into Google BigQuery and visual analysis in Google Data Studio.

Pushing the data into BigQuery is very easy—just install the BigQuery Sink Connector with:

confluent-hub install wepay/kafka-connect-bigquery:latest

Next, configure it while applying the following parameters (amongst others):

  • topics: defines the topic to read (in our case RSS_STREAM_WITH_SENTIMENT_DETAILS and TWITTER_STREAM_WITH_SENTIMENT_DETAILS)
  • project: the name of the Google project that we’ll use for billing
  • datasets=.*=wwc: defines the BigQuery dataset name
  • keyfile=$GOOGLE_CRED/myGoogleCredentials.json: points to the JSON file containing Google's credentials (which in our case is the same file used in the Google Natural Language scoring)
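
Assembled into a properties file, the connector configuration might look like the following sketch (the connector class name is taken from the WePay connector's documentation; the connector name and placeholder values are mine and need to be adapted):

name=bigquery-connector
connector.class=com.wepay.kafka.connect.bigquery.BigQuerySinkConnector
tasks.max=1
topics=RSS_STREAM_WITH_SENTIMENT_DETAILS,TWITTER_STREAM_WITH_SENTIMENT_DETAILS
project=<GOOGLE PROJECT NAME>
datasets=.*=wwc
keyfile=$GOOGLE_CRED/myGoogleCredentials.json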

Before starting the connector, we need to ensure the BigQuery dataset named wwc (as per the configuration file) exists; otherwise, the connector will fail. To do so, we can log into BigQuery, select the same project defined in the configuration file, and click on CREATE DATASET. Then, we'll need to fill in all the details (more information about dataset creation can be found in the Google documentation).

After creating the dataset, it’s time to start the connector with the Confluent CLI:

confluent load bigquery-connector -d $RSS_HOME/connectorBQ.properties

If the Kafka sink works, we should see one table per topic defined in the configuration file, which are RSS_STREAM_WITH_SENTIMENT_DETAILS and TWITTER_STREAM_WITH_SENTIMENT_DETAILS in our case.

Of course, we can query the data from BigQuery itself.

Visual Analysis in Google Data Studio

To start analysing the data in Google Data Studio, simply connect to the related console and select “Blank Report.”

We’ll be asked which data source to use for the project. Thus, we need to set up a connection to the wwc dataset by clicking on CREATE NEW DATASOURCE and selecting BigQuery as the connection. Then, select the project, dataset, and table ( TWITTER_STREAM_WITH_SENTIMENT_DETAILS).


We can then review the list of columns, types, and aggregations,  adding the data source to the report.

Finally, we can start creating visualisations like tiles to show record counts, line charts for the sentiment trend, and bar charts defining the most used languages.

A more advanced visualisation like a scatterplot shows the most common hashtags and the associated average sentiment value.

Below is a map visualising the average sentiment by country.

Analysing and comparing sentiment scores

Now that we have the two streams of data coming from Twitter and RSS feeds, we can do the analysis in KSQL and, in parallel, visually in Google Data Studio. We can, for example, examine the average sentiment over a timeframe and check how one source sentiment score compares to the other.

On the 27th of June, the quarterfinal match between Norway and England was played, with the result being that England beat Norway 3–0. Let’s check if we can somehow find significant similarities in the sentiment scoring of our dataset.

Starting with the Twitter feed, we can check all the tweets about ENGLAND and NORWAY by filtering on the related hashtag #NORENG. To obtain an overall score per team, I then assign to each team all the tweets containing the country's full name and aggregate the SENTIMENTSCORE with the following SQL:

CREATE STREAM TWITTER_NORWAY_ENGLAND AS
SELECT
CASE
WHEN UCASE(TEXT) LIKE '%NORWAY%' THEN SENTIMENTSCORE
END AS NORWAY_SENTIMENTSCORE,
CASE
WHEN UCASE(TEXT) LIKE '%ENGLAND%' THEN SENTIMENTSCORE
END AS ENGLAND_SENTIMENTSCORE
FROM TWITTER_STREAM_WITH_SENTIMENT_DETAILS
WHERE TEXT LIKE '%#NORENG%';

We can check the overall sentiment score associated with the two teams using:

SELECT
SUM(NORWAY_SENTIMENTSCORE)/COUNT(NORWAY_SENTIMENTSCORE) AS NORWAY_AVG_SCORE,
SUM(ENGLAND_SENTIMENTSCORE)/COUNT(ENGLAND_SENTIMENTSCORE) AS ENGLAND_AVG_SCORE
FROM TWITTER_NORWAY_ENGLAND
GROUP BY 1;

The GROUP BY 1 is necessary since KSQL currently requires a GROUP BY clause when using aggregation functions like SUM. Since we don't aggregate by any column other than the window time, we can use the constant 1 as a fixed grouping key for the total. The result of the above query is in line with the final score, with the winner (England) having an average sentiment score of 0.212 and the loser (Norway) a score of 0.0979.

We can also look at the behaviour per hour with the TUMBLING KSQL windowing function:

SELECT
TIMESTAMPTOSTRING(WINDOWSTART(), 'EEE dd MMM HH') AS START_WINDOW,
SUM(NORWAY_SENTIMENTSCORE)/COUNT(NORWAY_SENTIMENTSCORE) AS NORWAY_AVG_SCORE,
SUM(ENGLAND_SENTIMENTSCORE)/COUNT(ENGLAND_SENTIMENTSCORE) AS ENGLAND_AVG_SCORE
FROM TWITTER_NORWAY_ENGLAND
WINDOW TUMBLING(SIZE 1 HOURS)
GROUP BY 1;

The query yields the following result:

Thu 27 Jun 16 | 0.18409091 | 0.16075758
Thu 27 Jun 17 | 0.14481481 | 0.13887096
Thu 27 Jun 18 | 0.14714406 | 0.12107647
Thu 27 Jun 19 | 0.07926398 | 0.34757579
Thu 27 Jun 20 | 0.10077705 | 0.13762544
Thu 27 Jun 21 | 0.08387538 | 0.17832865

We can clearly see that towards match time (19:00 BST), the ENGLAND average score spikes, coinciding with England's first goal in the third minute. We can see the same on the Line Chart in Google Data Studio.

We can do a similar exercise on top of the RSS feeds stream, but first, we need to somehow filter it to get only FIFA Women’s World Cup 2019 data, since the predefined connector is ingesting all the news from the RSS sources without a topic filter. To do so, we create a new stream filtering only contents containing WOMEN and CUP:

CREATE STREAM RSS_STREAM_WITH_SENTIMENT_DETAILS_WWC as
SELECT *
FROM RSS_STREAM_WITH_SENTIMENT_DETAILS
WHERE UCASE(CONTENT) LIKE '%WOMEN%'
AND UCASE(CONTENT) LIKE '%CUP%';

We can now analyse the overall RSS sentiment with:

SELECT SUM(SENTIMENTSCORE)/COUNT(SENTIMENTSCORE)
FROM RSS_STREAM_WITH_SENTIMENT_DETAILS_WWC
GROUP BY 1;

As before, the SUM(SENTIMENTSCORE)/COUNT(SENTIMENTSCORE) is calculating the average. We can then calculate the sentiment average for the selected team. Taking the same example of ENGLAND and NORWAY used previously, we just declare a stream filtering the sentiment for the two nations. For example:

CREATE STREAM RSS_NORWAY_ENGLAND AS
SELECT
CASE WHEN UCASE(CONTENT) LIKE '%NORWAY%' THEN SENTIMENTSCORE END NORWAY_SENTIMENTSCORE,
CASE WHEN UCASE(CONTENT) LIKE '%ENGLAND%' THEN SENTIMENTSCORE END ENGLAND_SENTIMENTSCORE
FROM RSS_STREAM_WITH_SENTIMENT_DETAILS_WWC;

Then, we can analyse the separate scoring with:

SELECT
SUM(NORWAY_SENTIMENTSCORE)/COUNT(NORWAY_SENTIMENTSCORE) NORWAY_AVG_SENTIMENT,
SUM(ENGLAND_SENTIMENTSCORE)/COUNT(ENGLAND_SENTIMENTSCORE) ENGLAND_AVG_SENTIMENT
FROM RSS_NORWAY_ENGLAND
WHERE ROWTIME > STRINGTODATE('2019-06-27', 'yyyy-MM-dd')
GROUP BY 1;

The result is an average sentiment of 0.0575 for Norway and 0.111 for England, again in line with the match result where England won 3–0.

We can also understand the variation of the sentiment over time by using KSQL windowing functions like TUMBLING:

SELECT
TIMESTAMPTOSTRING(WINDOWSTART(), 'EEE dd MMM HH') START_WINDOW,
SUM(NORWAY_SENTIMENTSCORE)/COUNT(NORWAY_SENTIMENTSCORE) NORWAY_AVG_SENTIMENT,
SUM(ENGLAND_SENTIMENTSCORE)/COUNT(ENGLAND_SENTIMENTSCORE) ENGLAND_AVG_SENTIMENT
FROM RSS_NORWAY_ENGLAND WINDOW TUMBLING(SIZE 1 HOURS)
WHERE ROWTIME >= STRINGTODATE('2019-06-27', 'yyyy-MM-dd')
GROUP BY 1;

This yields the following results:

Thu 27 Jun 17 | 0.12876364 | 0.12876364
Thu 27 Jun 18 | 0.24957054 | 0.24957054
Thu 27 Jun 19 | 0.15606978 | 0.15606978
Thu 27 Jun 20 | 0.09970317 | 0.09970317
Thu 27 Jun 21 | 0.00809077 | 0.00809077
Thu 27 Jun 23 | 0.41298701 | 0.12389610

As expected from this source, most of the scores for the two countries are the same, since the number of articles is limited and almost all articles mention both ENGLAND and NORWAY.


Strangely, as we can see in the graph above, the NORWAY sentiment score on the 27th of June at 11:00 pm GMT (so after the match ended) is much higher than the ENGLAND one.

We can look at the data closely with:

SELECT ROWKEY, NORWAY_SENTIMENTSCORE, ENGLAND_SENTIMENTSCORE
from RSS_NORWAY_ENGLAND where TIMESTAMPTOSTRING(ROWTIME,'dd/MM HH') = '27/06 23';
https://www.theguardian.com/...      | 0.375974032  | 0.375974032
https://www.bbc.co.uk/.../48794550   | null         | -0.45428572
https://www.bbc.co.uk/.../48795487   | 0.449999988  | 0.449999988

We can see that NORWAY is associated with two articles: one from The Guardian with a positive 0.375 score and one from the BBC with a positive 0.449 score. ENGLAND, on the other hand, is also associated with another BBC article, which has a negative -0.454 score.
We can also compare the hourly Twitter and RSS sentiment scores by creating two tables:

CREATE TABLE TWITTER_NORWAY_ENGLAND_TABLE AS
SELECT
TIMESTAMPTOSTRING(WINDOWSTART(), 'dd HH') START_WINDOW,
SUM(NORWAY_SENTIMENTSCORE)/COUNT(NORWAY_SENTIMENTSCORE) NORWAY_AVG_SCORE,
SUM(ENGLAND_SENTIMENTSCORE)/COUNT(ENGLAND_SENTIMENTSCORE) ENGLAND_AVG_SCORE
FROM TWITTER_NORWAY_ENGLAND
WINDOW TUMBLING(SIZE 1 HOURS)
GROUP BY 1;
CREATE TABLE RSS_NORWAY_ENGLAND_TABLE AS
SELECT
TIMESTAMPTOSTRING(WINDOWSTART(), 'dd HH') START_WINDOW,
SUM(NORWAY_SENTIMENTSCORE)/COUNT(NORWAY_SENTIMENTSCORE) NORWAY_AVG_SCORE,
SUM(ENGLAND_SENTIMENTSCORE)/COUNT(ENGLAND_SENTIMENTSCORE) ENGLAND_AVG_SCORE
FROM RSS_NORWAY_ENGLAND WINDOW TUMBLING(SIZE 1 HOURS)
WHERE ROWTIME >= STRINGTODATE('2019-06-27', 'yyyy-MM-dd')
GROUP BY 1;

The key of both the tables is the window start date, as we can see from:

ksql> select rowkey from RSS_NORWAY_ENGLAND_TABLE limit 1;
1 : Window{start=1561557600000 end=-}

We can then join the results together with the following statement:

SELECT A.START_WINDOW,
A.NORWAY_AVG_SCORE TWITTER_NORWAY_SCORE,
A.ENGLAND_AVG_SCORE TWITTER_ENGLAND_SCORE,
B.NORWAY_AVG_SCORE RSS_NORWAY_SCORE,
B.ENGLAND_AVG_SCORE RSS_ENGLAND_SCORE
FROM
TWITTER_NORWAY_ENGLAND_TABLE A JOIN
RSS_NORWAY_ENGLAND_TABLE B
ON A.ROWKEY = B.ROWKEY;

This yields the following result:

Thu 27 Jun 17 | 0.14481481 | 0.13887096 | 0.12876364 | 0.12876364
Thu 27 Jun 18 | 0.14714406 | 0.12107647 | 0.24957054 | 0.24957054
Thu 27 Jun 19 | 0.07926398 | 0.34757579 | 0.15606978 | 0.15606978
Thu 27 Jun 20 | 0.10077705 | 0.13762544 | 0.09970317 | 0.09970317
Thu 27 Jun 21 | 0.08387538 | 0.17832865 | 0.00809077 | 0.00809077

And here is the same result, shown in a Data Studio Line Chart.

Interested in more?

If you’re interested in what KSQL can do, you can download the Confluent Platform to get started with the event streaming SQL engine for Apache Kafka. To help you get started, Rittman Mead provides a 30-day Kafka quick start package.

This article was originally posted on the Confluent blog.

Categories: BI & Warehousing

Documentum – D2+Pack Plugins not installed correctly

Yann Neuhaus - Sun, 2019-07-21 08:43

In a previous blog, I explained how D2 can be installed in silent mode. In this blog, I will talk about a possible issue that might happen when doing so: the D2+Pack Plugins not being installed, even if you ask D2 to install them, and without any message or error related to the problem. The first time I faced this issue was several years ago, but I never blogged about it. I faced it again recently, so I thought I would this time.

So first, let’s prepare the D2 and D2+Pack packages for the silent installation. I will take the D2_template.xml file from my previous blog as a starting point for the silent parameter file:

[dmadmin@cs_01 ~]$ cd $DOCUMENTUM/D2-Install/
[dmadmin@cs_01 D2-Install]$ ls *.zip *.tar.gz
-rw-r-----. 1 dmadmin dmadmin 491128907 Jun 16 08:12 D2_4.7.0_P25.zip
-rw-r-----. 1 dmadmin dmadmin  61035679 Jun 16 08:12 D2_pluspack_4.7.0.P25.zip
-rw-r-----. 1 dmadmin dmadmin 122461951 Jun 16 08:12 emc-dfs-sdk-7.3.tar.gz
[dmadmin@cs_01 D2-Install]$
[dmadmin@cs_01 D2-Install]$ unzip $DOCUMENTUM/D2-Install/D2_4.7.0_P25.zip -d $DOCUMENTUM/D2-Install/
[dmadmin@cs_01 D2-Install]$ unzip $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25.zip -d $DOCUMENTUM/D2-Install/
[dmadmin@cs_01 D2-Install]$ unzip $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/C2-Dar-Install.zip -d $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/
[dmadmin@cs_01 D2-Install]$ unzip $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/D2-Bin-Dar-Install.zip -d $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/
[dmadmin@cs_01 D2-Install]$ unzip $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/O2-Dar-Install.zip -d $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/
[dmadmin@cs_01 D2-Install]$ tar -xzvf $DOCUMENTUM/D2-Install/emc-dfs-sdk-7.3.tar.gz -C $DOCUMENTUM/D2-Install/
[dmadmin@cs_01 D2-Install]$
[dmadmin@cs_01 D2-Install]$ #See the previous blog for the content of the "/tmp/dctm_install/D2_template.xml" file
[dmadmin@cs_01 D2-Install]$ export d2_install_file=$DOCUMENTUM/D2-Install/D2.xml
[dmadmin@cs_01 D2-Install]$ cp /tmp/dctm_install/D2_template.xml ${d2_install_file}
[dmadmin@cs_01 D2-Install]$
[dmadmin@cs_01 D2-Install]$ sed -i "s,###WAR_REQUIRED###,true," ${d2_install_file}
[dmadmin@cs_01 D2-Install]$ sed -i "s,###BPM_REQUIRED###,true," ${d2_install_file}
[dmadmin@cs_01 D2-Install]$ sed -i "s,###DAR_REQUIRED###,true," ${d2_install_file}
[dmadmin@cs_01 D2-Install]$
[dmadmin@cs_01 D2-Install]$ sed -i "s,###DOCUMENTUM###,$DOCUMENTUM," ${d2_install_file}
[dmadmin@cs_01 D2-Install]$
[dmadmin@cs_01 D2-Install]$ sed -i "s,###PLUGIN_LIST###,$DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/C2-Install-4.7.0.jar;$DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/D2-Bin-Install-4.7.0.jar;$DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/O2-Install-4.7.0.jar;," ${d2_install_file}
[dmadmin@cs_01 D2-Install]$
[dmadmin@cs_01 D2-Install]$ sed -i "s,###JMS_HOME###,$DOCUMENTUM_SHARED/wildfly9.0.1," ${d2_install_file}
[dmadmin@cs_01 D2-Install]$
[dmadmin@cs_01 D2-Install]$ sed -i "s,###DFS_SDK_PACKAGE###,emc-dfs-sdk-7.3," ${d2_install_file}
[dmadmin@cs_01 D2-Install]$
[dmadmin@cs_01 D2-Install]$ read -s -p "  ----> Please enter the Install Owner's password: " dm_pw; echo; echo
  ----> Please enter the Install Owner's password: <TYPE HERE THE PASSWORD>
[dmadmin@cs_01 D2-Install]$ sed -i "s,###INSTALL_OWNER###,dmadmin," ${d2_install_file}
[dmadmin@cs_01 D2-Install]$ sed -i "s,###INSTALL_OWNER_PASSWD###,${dm_pw}," ${d2_install_file}
[dmadmin@cs_01 D2-Install]$
[dmadmin@cs_01 D2-Install]$ sed -i "s/###DOCBASE_LIST###/Docbase1/" ${d2_install_file}
[dmadmin@cs_01 D2-Install]$

 

Now that the silent file is ready and all source packages are available, we can start the D2 installation with the command below. Please note the usage of the tracing/debugging options as well as the usage of the “-Djava.io.tmpdir” Java option to ask D2 to put all tmp files in a specific directory. With this, D2 is supposed to trace/debug everything and use my specific temporary folder:

[dmadmin@cs_01 D2-Install]$ java -DTRACE=true -DDEBUG=true -Djava.io.tmpdir=$DOCUMENTUM/D2-Install/tmp -jar $DOCUMENTUM/D2-Install/D2_4.7.0_P25/D2-Installer-4.7.0.jar ${d2_install_file}

 

The D2 Installer printed the following extract:

...
Installing plugin: $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/C2-Install-4.7.0.jar
Plugin install command: [java, -jar, $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/C2-Install-4.7.0.jar, $DOCUMENTUM/D2-Install/tmp/D2_4.7.0/scripts/C6-Plugins-Install_new.xml]
Line read: [ Starting automated installation ]
Installing plugin: $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/D2-Bin-Install-4.7.0.jar
Plugin install command: [java, -jar, $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/D2-Bin-Install-4.7.0.jar, $DOCUMENTUM/D2-Install/tmp/D2_4.7.0/scripts/C6-Plugins-Install_new.xml]
Line read: [ Starting automated installation ]
Installing plugin: $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/O2-Install-4.7.0.jar
Plugin install command: [java, -jar, $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/O2-Install-4.7.0.jar, $DOCUMENTUM/D2-Install/tmp/D2_4.7.0/scripts/C6-Plugins-Install_new.xml]
Line read: [ Starting automated installation ]
Installing plugin: $DOCUMENTUM/D2-Install/tmp/D2_4.7.0/plugin/D2-Widget-Install.jar
...
...
Current line: #################################
Current line: #           Plugins               #
Current line: #################################
Current line: #plugin_1=../C2/C2-Plugin.jar
Updating line with 'plugin_'.
Updating plugin 1 with plugin name: D2-Widget-Plugin.jar and config exclude value of: false
Updating plugin 2 with plugin name: D2-Specifications-Plugin.jar and config exclude value of: false
Current line: #plugin_2=../O2/O2-Plugin.jar
Current line: #plugin_3=../P2/P2-Plugin.jar
...

 

As you can see, there are no errors, so if you aren't paying attention, you might think that the D2+Pack is properly installed. It's not. At the end of the extract above, you can see that the D2 Installer is updating the plugins list with some elements (D2-Widget-Plugin.jar & D2-Specifications-Plugin.jar). If there were no issue, the D2+Pack Plugins would have been added to this section as well, which isn't the case.

You can check all the temporary files and all the log files: nowhere will it be printed that there was an issue while installing the D2+Pack Plugins. In fact, there are three things missing:

  • The DARs of the D2+Pack Plugins weren’t installed
  • The libraries of the D2+Pack Plugins weren’t deployed into the JMS
  • The libraries of the D2+Pack Plugins weren’t packaged in the WAR files

There is a way to quickly check whether the D2+Pack Plugins' DARs have been installed: just look inside the docbase config folder. There should be one log file for the D2 Core DARs as well as one log file for each of the D2+Pack Plugins. This is what you should get:

[dmadmin@cs_01 D2-Install]$ cd $DOCUMENTUM/dba/config/Docbase1/
[dmadmin@cs_01 Docbase1]$ ls -ltr *.log
-rw-r-----. 1 dmadmin dmadmin  62787 Jun 16 08:18 D2_CORE_DAR.log
-rw-r-----. 1 dmadmin dmadmin   4794 Jun 16 08:20 D2-C2_dar.log
-rw-r-----. 1 dmadmin dmadmin   3105 Jun 16 08:22 D2-Bin_dar.log
-rw-r-----. 1 dmadmin dmadmin   2262 Jun 16 08:24 D2-O2_DAR.log
[dmadmin@cs_01 Docbase1]$

 

If you only have “D2_CORE_DAR.log”, then you are potentially facing this issue. You could also check the “csDir” folder that you put in the D2 silent parameter file: if this folder doesn’t contain “O2-API.jar” or “C2-API.jar” or “D2-Bin-API.jar”, then you have the issue as well. Obviously, you could also check the list of installed DARs in the repository…
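
For that last check, a quick DQL query against the dmc_dar type (run here through idql, but any DQL client will do) should list the DARs that were actually installed; a sketch:

1> select object_name, r_creation_date from dmc_dar order by object_name
2> go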

So what’s the issue? Well, you remember above when I mentioned the “-Djava.io.tmpdir” Java option to specifically ask D2 to put all temporary files under a certain location? The D2 Installer, for the D2 part, is using this option without issue… But for the D2+Pack installation, there is actually a hardcoded path for the temporary files which is /tmp. Therefore, it will ignore this Java option and will try instead to execute the installation under /tmp.

This is the issue I have faced a few times already, and it's the one I wanted to talk about in this blog. For security reasons, you might have to deal from time to time with specific mount options on file systems. In this case, the “noexec” option was set on the /tmp mount point, so D2 wasn't able to execute commands under /tmp and, instead of printing an error, it just silently bypassed the installation. I had an SR opened with Documentum Support (when it was still EMC) to see if it was possible to use the Java option and not /tmp, but it looks like it's still not solved since I had the exact same issue with D2 4.7 P25, which was released very recently.

Since there is apparently no way to specify which temporary folder should be used for the D2+Pack Plugins, you should either perform the installation manually (DAR installation + libraries in JMS & WAR files) or remove the “noexec” option on the file system for the duration of the installation:

[dmadmin@cs_01 Docbase1]$ mount | grep " /tmp"
/dev/mapper/VolGroup00-LogVol06 on /tmp type ext4 (rw,noexec,nosuid,nodev)
[dmadmin@cs_01 Docbase1]$
[dmadmin@cs_01 Docbase1]$ sudo mount -o remount,exec /tmp
[dmadmin@cs_01 Docbase1]$ mount | grep " /tmp"
/dev/mapper/VolGroup00-LogVol06 on /tmp type ext4 (rw,nosuid,nodev)
[dmadmin@cs_01 Docbase1]$
[dmadmin@cs_01 Docbase1]$ #Execute the D2 Installer here
[dmadmin@cs_01 Docbase1]$
[dmadmin@cs_01 Docbase1]$ sudo mount -o remount /tmp
[dmadmin@cs_01 Docbase1]$ mount | grep " /tmp"
/dev/mapper/VolGroup00-LogVol06 on /tmp type ext4 (rw,noexec,nosuid,nodev)

 

With the workaround in place, the D2 Installer should now print the following (same extract as above):

...
Installing plugin: $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/C2-Install-4.7.0.jar
Plugin install command: [java, -jar, $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/C2-Install-4.7.0.jar, $DOCUMENTUM/D2-Install/tmp/D2_4.7.0/scripts/C6-Plugins-Install_new.xml]
Line read: [ Starting automated installation ]
Line read: Current MAC address : [ Starting to unpack ]
Line read: [ Processing package: core (1/2) ]
Line read: [ Processing package: DAR (2/2) ]
Line read: [ Unpacking finished ]
Line read: [ Writing the uninstaller data ... ]
Line read: [ Automated installation done ]
Installing plugin: $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/D2-Bin-Install-4.7.0.jar
Plugin install command: [java, -jar, $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/D2-Bin-Install-4.7.0.jar, $DOCUMENTUM/D2-Install/tmp/D2_4.7.0/scripts/C6-Plugins-Install_new.xml]
Line read: [ Starting automated installation ]
Line read: Current MAC address : [ Starting to unpack ]
Line read: [ Processing package: core (1/2) ]
Line read: [ Processing package: DAR (2/2) ]
Line read: [ Unpacking finished ]
Line read: [ Writing the uninstaller data ... ]
Line read: [ Automated installation done ]
Installing plugin: $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/O2-Install-4.7.0.jar
Plugin install command: [java, -jar, $DOCUMENTUM/D2-Install/D2_pluspack_4.7.0.P25/Plugins/O2-Install-4.7.0.jar, $DOCUMENTUM/D2-Install/tmp/D2_4.7.0/scripts/C6-Plugins-Install_new.xml]
Line read: [ Starting automated installation ]
Line read: Current MAC address : [ Starting to unpack ]
Line read: [ Processing package: core (1/2) ]
Line read: [ Processing package: DAR (2/2) ]
Line read: [ Unpacking finished ]
Line read: [ Writing the uninstaller data ... ]
Line read: [ Automated installation done ]
Installing plugin: $DOCUMENTUM/D2-Install/tmp/D2_4.7.0/plugin/D2-Widget-Install.jar
...
...
Current line: #################################
Current line: #           Plugins               #
Current line: #################################
Current line: #plugin_1=../C2/C2-Plugin.jar
Updating line with 'plugin_'.
Updating plugin 1 with plugin name: D2-Widget-Plugin.jar and config exclude value of: false
Updating plugin 2 with plugin name: C2-Plugin.jar and config exclude value of: false
Updating plugin 3 with plugin name: O2-Plugin.jar and config exclude value of: false
Updating plugin 4 with plugin name: D2-Specifications-Plugin.jar and config exclude value of: false
Updating plugin 5 with plugin name: D2-Bin-Plugin.jar and config exclude value of: false
Current line: #plugin_2=../O2/O2-Plugin.jar
Current line: #plugin_3=../P2/P2-Plugin.jar
...

 

As you can see above, the output is quite different: it means that the D2+Pack Plugins have been installed.

 

Cet article Documentum – D2+Pack Plugins not installed correctly est apparu en premier sur Blog dbi services.

Firefox on Linux: password field flickering / curser blinks...

Dietrich Schroff - Sat, 2019-07-20 07:53
After upgrading to Firefox 68 I was not able to fill in the password (and confirmation password) fields in online registrations.
The cursor in the password fields kept flickering/blinking, and on some pages Firefox froze completely.
With other browsers this does not happen, but I wanted to stay with Firefox.

The solution was to start Firefox with an additional environment variable:

GTK_IM_MODULE=xim firefox
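
To avoid typing the variable every time, one option is to wrap it in a shell alias (this only affects Firefox started from a shell; adjust to your launcher or desktop environment):

# e.g. add to ~/.bashrc
alias firefox='GTK_IM_MODULE=xim firefox'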

Report Google Map Plugin v1.0 Released

Jeff Kemp - Fri, 2019-07-19 11:25

Over the past couple of weeks I’ve been working on an overhaul of my Google Maps region for Oracle Application Express. This free, open-source plugin allows you to integrate fully-featured Google Maps into your application, with a wide range of built-in declarative features including dynamic actions, as well as more advanced API routines for running custom JavaScript with the plugin.

The plugin has been updated to Oracle APEX 18.2 (as that is the version my current system is using). Unfortunately this means that people still on older versions will miss out, unless someone is willing to give me a few hours on their APEX 5.0 or 5.1 instance so I can backport the plugin.

EDIT: Release 1.0.1 includes some bugfixes and a backport for APEX 5.0, 5.1 and 18.1.

The plugin is easy to install and use. You provide a SQL Query that returns latitude, longitude, and information for the pins, and the plugin does all the work to show them on the map.
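
For illustration only, a query for the plugin might look something like the sketch below; the table and column names here are hypothetical, and the exact columns the plugin expects are described in its documentation and wiki.

select t.latitude   as lat,
       t.longitude  as lng,
       t.store_name as name,  -- label shown for the pin
       t.store_id   as id     -- identifier passed back to dynamic actions
  from my_store_locations t
 where t.active = 'Y'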

The plugin has been rewritten to use the JQuery UI Widgets interface, at the suggestion of Martin D’Souza. This makes for a cleaner integration on any APEX page, and reduces the JavaScript footprint of each instance on the page if you need two or more map regions at the same time. This represented a rather steep learning curve for me personally, but I learned a lot and I’m pleased with the result. Of course, I’m sure I’ve probably missed a few tricks that the average JavaScript coder would say was obvious.

The beta releases of the plugin (0.1 to 0.10) kept adding more and more plugin attributes until it hit the APEX limit of 25 region-level attributes. This was obviously not very scaleable for future enhancements, so in Release 1.0 I ran the scythe through all the attributes and consolidated, replaced, or removed more than half of them – while preserving almost every single feature. This means v1.0 is not backwards compatible with the beta versions; although many attributes are preserved, others (including the SQL Query itself, which is rather important) would be lost in the conversion if the plugin was merely replaced. For this reason I’ve changed the Internal ID of the plugin. This is so that customers who are currently using a beta version can safely install Release 1.0 alongside it, without affecting all the pages where they are using the plugin. They can then follow the instructions to gradually upgrade each page that uses the plugin.

All of the plugin attributes relating to integrating the plugin with page items have been removed. Instead, it is relatively straightforward to use Dynamic Actions to respond to events on the map, and an API of JavaScript functions can be called to change its behaviour. All of this is fully documented and sample code can be found in the wiki.

New features include, but are not limited to:

  • Marker Clustering
  • Geo Heatmap visualisation (this replaces the functionality previously provided in a separate plugin)
  • Draggable pins
  • Lazy Load (data is now loaded in a separate Ajax call after the page is loaded)

The plugin attributes that have been added, changed or removed are listed here.

If you haven’t used this plugin before, I encourage you to give it a go. It’s a lot of fun and the possibilities presented by the Google Maps JavaScript API are extensive. You do need a Google Maps API Key, which requires a Google billing account, but it is worth the trouble. It is recommended to put an HTTP Referer restriction on your API Key so that people can’t just copy your public key and use it on their own sites. For more information refer to the Installation Instructions.

If you are already using a beta version of the plugin in your application, please review the Upgrading steps before starting. Don’t panic! It’s not quite as simple as just importing the plugin into your application, but it’s not overly complicated. If you were using any of the Page Item integration attributes, you will need to implement Dynamic Actions to achieve the same behaviour. If you had any JavaScript integrations with the plugin, you will need to update them to use the new JQuery UI Widget API calls. I am keen for everyone to update to Release 1.0 as soon as possible, so I will provide free support (via email) for anyone needing help with this.

I am very keen to hear from everyone who is using the plugin, and how it is being used – please let me know in the comments below. If you notice a bug or have a great idea to enhance the plugin, please raise an issue on GitHub.


Explaining Oracle’s Success in Cloud Applications

Oracle Press Releases - Wed, 2019-07-17 14:00
Blog
Explaining Oracle’s Success in Cloud Applications

By Michael Hickins—Jul 17, 2019

The University of Pittsburgh has moved its business to Oracle Cloud Applications to improve efficiency and organizational insights.

Every business and every business leader today wants to modernize using technology, all while reducing costs and, above all, eliminating complexity.

From insurance to health care to education to luxury goods, companies across all industries are juggling competing demands from employees, consumers, and partners, and are being challenged to make better decisions across the board, more quickly than ever.

And they’re looking to large business cloud providers like Oracle to not only simplify IT and make it more accessible to stakeholders, but to provide continuous improvements, such as the inclusion of intuitive AI and machine learning tools.

“Technology is an incredible enabler that can help organizations… not just streamline processes, but also improve engagement and transform existing business processes and models,” says Rondy Ng, senior vice president of applications development at Oracle.

Monte Ciotto, associate vice chancellor of financial information systems at the University of Pittsburgh, which recently decided to implement Oracle ERP Cloud, noted the critical importance of technology in the university’s ability to achieve its mission. “To maximize value and impact, our technology must be a coordinated effort across business functions,” he said. “With Oracle ERP Cloud we’ll be able to manage finance, HR and student data on the same platform, creating a single source of truth that improves efficiency and organizational insights.”

These types of organizational insights are in large part driven by analytics, AI, and ML functions of the type being embedded within the larger Oracle ERP Cloud and Oracle HCM Cloud application suites.

Oracle’s strong position is underlined by a statement from IDC cited by Oracle CEO Mark Hurd during the most recent earnings report: “Per IDC's latest annual market share results, Oracle gained the most market share globally out of all enterprise applications SaaS vendors three years running—in calendar year ‘16, ‘17 and ‘18.”*

Hurd noted a number of businesses that have recently chosen Oracle as their cloud provider, including Ferguson, a $21 billion wholesale plumbing equipment distributor, which is using Oracle ERP Cloud along with EPM and supply chain applications.

Other recent converts to the Oracle Cloud Applications suite include Argo Insurance, Experian, Helmerich & Payne, Wright Medical, Emerson Electric, Rutgers University, Waste Management, and Tiffany.

If Oracle is succeeding so wildly in the enterprise cloud apps space, it’s only because it’s helping its customers succeed by making it easier for them to find answers and solutions to the mounting challenges that face them day to day.

“Education is evolving and the technology that drives our organization forward needs to reflect modern education best practices,” said Becky King, associate vice president of IT, Baylor University. “Shifting to Oracle Cloud Applications will help us introduce modern best practices that will make our organization more efficient and reach our goal of becoming a top-tier, Christian research institution. Moving core finance, planning and HR systems to one cloud-based platform will also improve business insight and enhance our ability to respond to changing dynamics in education.”

*Source: Per IDC’s latest annual market share results, Oracle gained the most market share globally out of all Enterprise Applications SaaS vendors three years running—in CY16, CY17 and CY18.

Source: IDC Public Cloud Services Tracker, April 2019. Enterprise Applications SaaS refers to the IDC SaaS markets CRM, Engineering, Enterprise Resource Management (including HCM, Financial, Enterprise Performance Management, Payroll, Procurement, Order Management, PPM, EAM), SCM, and Production and Operations Applications

Oracle Named a Leader in Retail Price Optimization Applications Worldwide

Oracle Press Releases - Wed, 2019-07-17 07:00
Press Release
Oracle Named a Leader in Retail Price Optimization Applications Worldwide Recognized for providing next-generation retail planning in the age of curated merchandise orchestration

Redwood Shores, Calif.—Jul 17, 2019

Oracle was recently named a Leader in the IDC MarketScape: Worldwide Retail Price Optimization Applications 2019 Vendor Assessment.1 The list includes price optimization vendors focused on the B2C business model based on their client base, use of advanced analytics, machine learning and artificial intelligence (AI), and prominence on enterprise retailers’ buying shortlist. Oracle Retail’s placement as a leader underscores its continued investment in analytics, optimization, and data sciences. With these offerings, retailers gain an unprecedented line of sight into the future performance of their assortments and the impact every promotional offer will have on the bottom line.

The IDC MarketScape report notes “that traditional retail planning, of which life-cycle price optimization is a key part, has run its course. We’ve defined next-generation retail planning as curated merchandise orchestration (CMO). CMO is the central nervous system of enterprise and ecosystem signals that harmonizes its own and adjacent processes from design to deliver. Pricing spans the curating and orchestrating sides of CMO as a flywheel to create sufficient demand and efficient sell-through of inventory.” In the report, IDC recognized Oracle for its deep expertise and technology assets in complementary merchandising and supply chain planning analytics, execution, and operations as well as its omni-channel commerce platform.

Oracle Retail’s price, promotion, and markdown optimization applications leverage Oracle Retail Science Platform Cloud Service which combines AI, machine learning, and decision science with data captured from Oracle Retail SaaS applications as well as third-party data. The unique property of these self-learning applications is that they detect trends, learn from results, and increase their accuracy the more they are used, adding massive amounts of contextual data to get a clearer picture on what motivates outcomes. These pricing applications are complemented by a broad suite of retail planning, optimization, and execution applications—particularly financial planning.

“Retailers have a big opportunity to leverage machine learning and retail science to bring their business to the next level of ‘curated merchandise orchestration,’” said Jeff Warren, vice president, Oracle Retail. “Oracle provides retailers with next-practice price optimization applications based on our Retail Science Platform, empowering them to automate and predict the impact of pricing decisions while simultaneously delivering offers that delight their customers and increase redemption rates.”

Oracle Retail solutions considered in the IDC MarketScape: Worldwide Price Optimization Applications, 2019 Vendor Assessment, include Oracle Retail Merchandising System Cloud Service, Oracle Retail Customer Engagement Cloud Service, Oracle Retail Planning and Optimization Suite, and the Oracle Retail Science Platform Cloud Service.

Access a complimentary excerpt copy of the report for further details.

1 IDC MarketScape: Worldwide Retail Price Optimization Applications 2019 Vendor Assessment, doc #US45034619,  May 2019

Contact Info
Kris Reeves
Oracle
+1.650.506.5942
kris.reeves@oracle.com
About IDC MarketScape

IDC MarketScape vendor assessment model is designed to provide an overview of the competitive fitness of ICT (information and communications technology) suppliers in a given market. The research methodology utilizes a rigorous scoring methodology based on both qualitative and quantitative criteria that results in a single graphical illustration of each vendor’s position within a given market. IDC MarketScape provides a clear framework in which the product and service offerings, capabilities and strategies, and current and future market success factors of IT and telecommunications vendors can be meaningfully compared. The framework also provides technology buyers with a 360-degree assessment of the strengths and weaknesses of current and prospective vendors.

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations, and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kris Reeves

  • +1.650.506.5942

Schedule reboots of your AWS instances and how that can result in a hard reboot and corruption

Yann Neuhaus - Wed, 2019-07-17 02:50

From time to time you might need to reboot your AWS instances, maybe because you applied some patches, or for whatever other reason. Rebooting an AWS instance can be done in several ways: you can of course do it directly from the AWS console, and you can use the AWS command line utilities as well. If you want to schedule a reboot, you can do that either with CloudWatch or with SSM Maintenance Windows. In this post we will only look at CloudWatch and Systems Manager, as these two can be used to schedule the reboot easily with AWS native utilities. You could, of course, also do it using cron and the AWS command line utilities, but that is not in the scope of this post.

For CloudWatch the procedure for rebooting instances is the following: Create a new rule:

Go for “Schedule” and give a cron expression. In this case it means: 16-July-2019 at 07:45 (the expression format is sketched below). Select the “EC2 RebootInstances API call” and provide the instance IDs you want to have rebooted. There is one limitation: you can only add up to five targets. If you need more, you have to use Systems Manager as described later in this post. You should pre-create an IAM role with sufficient permissions to use for this, as otherwise a new one will be created each time.
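
For reference, CloudWatch scheduled rules use a six-field cron syntax and the times are interpreted as UTC; the expression for this example would look like this:

# cron(Minutes Hours Day-of-month Month Day-of-week Year)
cron(45 7 16 7 ? 2019)    # 07:45 UTC on 16-July-2019; "?" because day-of-month is fixed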

Finally give a name and a description, that’s it:


Once the time specified in your cron expression is reached, the instance(s) will reboot.

The other solution for scheduling stuff against many instances is to use AWS SSM. It requires a bit more preparation work, but in the end this is the solution we decided to go for, as more instances can be scheduled with one maintenance window (up to 50) and you can combine several tasks, e.g. executing something before the reboot and doing something else after the reboot.

The first step is to create a new maintenance window:

Of course it needs a name and an optional description:

Again, in this example, we use a cron expression for the scheduling (same as above in the CloudWatch example). Be aware that this is UTC time:

Once the maintenance window is created we need to attach a task to it. Until now we only specified a time to run something but we did not specify what to run. Attaching a task can be done in the task section of the maintenance window:

In this case we go for an “Automation task”. Name and description are not required:

The important part is the document to run, in our case it is “AWS-RestartEC2Instance”:
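
As a side note, and as an assumption on my part rather than something shown in the original post, the same automation document can also be started ad hoc from the AWS CLI; the parameter name is taken from the document's published schema and should be verified against your version, and the instance ID is a placeholder:

aws ssm start-automation-execution \
    --document-name "AWS-RestartEC2Instance" \
    --parameters "InstanceId=i-0123456789abcdef0"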

Choose the instances you want to run the document against:

And finally specify the concurrency and error count and again, an IAM role with sufficient permissions to perform the actions defined in the document:

Last, but not least, specify a pseudo parameter called “{TARGET_ID}” which will tell AWS SSM to run that against all the instances you selected in the upper part of the screen:

That’s it. Your instances will be rebooted at the time you specified in the cron expression. All fine and easy, and you never have to worry about scheduled instance reboots again: just adjust the cron expression and maybe the list of instances and you are done for the next scheduled reboot. Really? We did it like that against 100 instances and we got a real surprise. What happened? Not many, but a few instances were rebooted hard, and one of them even needed to be restored afterwards. Why? This never happened in the tests we did before. When an instance does not reboot within 4 minutes, AWS performs a hard reboot. This can lead to corruption, as stated here. When you have busy instances at the time of the reboot, this is not what you want. On Windows you get something like this:

You can easily reproduce that by putting a Windows system under heavy load with a CPU stress test and then scheduling a reboot as described above.

In the background the automation document calls aws:changeInstanceState and that comes with a force parameter:

… and here we have it again: Risk of corruption. When you take a closer look at the automation document that stops an EC2 instance you can see that as well:

So what is the conclusion of all this? It is not to blame AWS for anything; everything is documented and works as documented. Testing in a test environment does not necessarily mean it works the same way in production. Even if a behavior is documented, you might not expect it, because your tests went fine and you missed the part of the documentation where the behavior is explained. AWS Systems Manager is still a great tool for automating tasks, but you really need to understand what happens before implementing it in production. And finally: working on public clouds makes many things easier, but others harder to understand and troubleshoot.

The article Schedule reboots of your AWS instances and how that can result in a hard reboot and corruption appeared first on Blog dbi services.

Age Range of Generations & Customer Demographics

VitalSoftTech - Tue, 2019-07-16 10:09

As an online content producer and marketer, one of your primary concerns should be whether the content and ads on your website are reaching the correct customer demographics and generation age ranges. Come to think of it, who exactly your target audience is can be a challenging question. You must answer this question to make […]

The post Age Range of Generations & Customer Demographics appeared first on VitalSoftTech.

Categories: DBA Blogs

Downloading And Installing JDK 8 for OIC Connectivity Agent

Online Apps DBA - Tue, 2019-07-16 06:25

[Downloading And Installing JDK 8 for OIC Connectivity Agent] Some adapters/connectors use a connectivity agent to establish a connection with the on-premise system, and the connectivity agent uses the JVM (Java Virtual Machine) to run code on the on-premise system. So, are you looking for the steps to ⬇Download and Install JDK Version 8 on […]

The post Downloading And Installing JDK 8 for OIC Connectivity Agent appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Intermittent cellsrv crashes with ORA-07445 after upgrading to 12.2.1.1.7

Syed Jaffar - Tue, 2019-07-16 04:05
Exadata X6-2 full and half racks were recently patched with the 12.2.1.1.7 Aug/2018 quarterly patch set. Afterwards, ORA-07445 errors started to appear and cellsrv began crashing intermittently.

The following error is noticed in the cellsrv alert.log:

ORA-07445: exception encountered: core dump [0000000000000000+0] [11] [0x000000000] [] [] [] 

The above is registered as a bug that occurs when the cell storage software is patched to 12.2.1.1.7 or 12.2.1.1.8. Therefore, if you are planning to patch your cell storage software to one of these versions, ensure you also apply patch 28181789 to avoid cellsrv crashing intermittently with ORA-07445. Alternatively, upgrade the storage software to 18.1.5.

Symptom

Customer may experience intermittent cellsrv crashes with ORA-07445: [0000000000000000+0] [11] [0x000000000] after upgrading storage cell software version to 12.2.1.1.7 or 12.2.1.1.8

Change

Upgrade the Storage cell software to 12.2.1.1.7 or 12.2.1.1.8

Cause

Bug 28181789 - ORA-07445: [0000000000000000+0] AFTER UPGRADING CELL TO 12.2.1.1.7

Fix

Patch 28181789 is available for the 12.2.1.1.7 and 12.2.1.1.8 storage software releases. Follow the README instructions to apply the patch.
~OR~
apply 18.1.5 or later, which includes the fix for bug 28181789

References:
Exadata: Intermittent cellsrv crashes with ORA-07445: [0000000000000000+0] [11] [0x000000000] after upgrading to 12.2.1.1.7 or 12.2.1.1.8 (Doc ID 2421083.1)

Oracle EBS 12.2 - adop phase=cleanup fails with "library cache pin" waits

Senthil Rajendran - Tue, 2019-07-16 03:24
adop phase=cleanup fails with "library cache pin" waits

When running the cleanup action in adop, it hung forever.

Let's dig deeper to see the problem.

SQL> SELECT s.sid,
       s.username,
       s.program,
       s.module from v$session s where module like '%AD_ZD%';

2313 APPS       perl@pwercd01vn074 (TNS V1-V3)                   AD_ZD                                                            


SQL> select event from v$session where sid=2313 ;

EVENT
----------------------------------------------------------------
library cache pin


SQL>


SQL> select decode(lob.kglobtyp, 0, 'NEXT OBJECT', 1, 'INDEX', 2, 'TABLE', 3, 'CLUSTER',
                      4, 'VIEW', 5, 'SYNONYM', 6, 'SEQUENCE',
                      7, 'PROCEDURE', 8, 'FUNCTION', 9, 'PACKAGE',
                      11, 'PACKAGE BODY', 12, 'TRIGGER',
                      13, 'TYPE', 14, 'TYPE BODY',
                      19, 'TABLE PARTITION', 20, 'INDEX PARTITION', 21, 'LOB',
                      22, 'LIBRARY', 23, 'DIRECTORY', 24, 'QUEUE',
                      28, 'JAVA SOURCE', 29, 'JAVA CLASS', 30, 'JAVA RESOURCE',
                      32, 'INDEXTYPE', 33, 'OPERATOR',
                      34, 'TABLE SUBPARTITION', 35, 'INDEX SUBPARTITION',
                      40, 'LOB PARTITION', 41, 'LOB SUBPARTITION',
                      42, 'MATERIALIZED VIEW',
                      43, 'DIMENSION',
                      44, 'CONTEXT', 46, 'RULE SET', 47, 'RESOURCE PLAN',
                      48, 'CONSUMER GROUP',
                      51, 'SUBSCRIPTION', 52, 'LOCATION',
                      55, 'XML SCHEMA', 56, 'JAVA DATA',
                      57, 'SECURITY PROFILE', 59, 'RULE',
                      62, 'EVALUATION CONTEXT',
                      'UNDEFINED') object_type,
       lob.KGLNAOBJ object_name,
       pn.KGLPNMOD lock_mode_held,
       pn.KGLPNREQ lock_mode_requested,
       ses.sid,
       ses.serial#,
       ses.username
  FROM
       x$kglpn pn,
       v$session ses,
       x$kglob lob,
       v$session_wait vsw
  WHERE
       pn.KGLPNUSE = ses.saddr
       and pn.KGLPNHDL = lob.KGLHDADR
       and lob.kglhdadr = vsw.p1raw
       and vsw.event = 'library cache pin'
  order by lock_mode_held desc



OBJECT_TYPE        OBJECT_NAME          LOCK_MODE_HELD LOCK_MODE_REQUESTED        SID    SERIAL# USERNAME
------------------ -------------------- -------------- ------------------- ---------- ---------- ------------------------------
PACKAGE            DBMS_SYS_SQL                      2                   0       3822      60962 APPS
PACKAGE            DBMS_SYS_SQL                      2                   0       2313      27404 APPS
PACKAGE            DBMS_SYS_SQL                      2                   0       2313      27404 APPS
PACKAGE            DBMS_SYS_SQL                      2                   0       3822      60962 APPS
PACKAGE            DBMS_SYS_SQL                      0                   2       3821      14545
PACKAGE            DBMS_SYS_SQL                      0                   2       3821      14545
PACKAGE            DBMS_SYS_SQL                      0                   3       2313      27404 APPS

PACKAGE            DBMS_SYS_SQL                      0                   3       2313      27404 APPS


SQL> select distinct
       ses.ksusenum sid, ses.ksuseser serial#, ses.ksuudlna username, ses.ksuseunm machine,
       ob.kglnaown obj_owner, ob.kglnaobj obj_name,
       pn.kglpncnt pin_cnt, pn.kglpnmod pin_mode, pn.kglpnreq pin_req,
       w.state, w.event, w.wait_time, w.seconds_in_wait
       -- lk.kglnaobj, lk.user_name, lk.kgllksnm,
       --,lk.kgllkhdl, lk.kglhdpar
       --,trim(lk.kgllkcnt) lock_cnt, lk.kgllkmod lock_mode, lk.kgllkreq lock_req,
       --,lk.kgllkpns, lk.kgllkpnc, pn.kglpnhdl
     from
       x$kglpn pn, x$kglob ob, x$ksuse ses, v$session_wait w
     where pn.kglpnhdl in
       (select kglpnhdl from x$kglpn where kglpnreq > 0)
       and ob.kglhdadr = pn.kglpnhdl
       and pn.kglpnuse = ses.addr
       and w.sid = ses.indx
     order by seconds_in_wait desc
/

       SID    SERIAL# USERNAME                       MACHINE                        OBJ_OWNER            OBJ_NAME                PIN_CNT   PIN_MODE    PIN_REQ STATE               EVENT                                                         WAIT_TIME SECONDS_IN_WAIT
---------- ---------- ------------------------------ ------------------------------ -------------------- -------------------- ---------- ---------- ---------- ------------------- ---------------------------------------------------------------- ---------- ---------------
      2313      27404 APPS                           apprnrcod05                    SYS                  DBMS_SYS_SQL                  0          0          3 WAITING             library cache pin                                             0              701
      2313      27404 APPS                           apprnrcod05                    SYS                  DBMS_SYS_SQL                  2          2          0 WAITING             library cache pin                                             0              701
       803      46104 APPS                           orarnrcod05                    SYS                  DBMS_SYS_SQL                  2          2          0 WAITED SHORT TIME   control file sequential read                                 -1               24



Other errors appear in the adop logs; you can also use scanlogs to verify the errors.

[ERROR] [CLEANUP 1:1 ddl_id=69120] ORA-04020: deadlock detected while trying to lock object SYS.DBMS_SYS_SQL SQL: begin sys.ad_grants.cleanup; end;


Reference
Adop Cleanup Issue: "[ERROR] [CLEANUP] ORA-04020: deadlock detected " (Doc ID 2424333.1)

Fix : 

SQL> select count(1)
from dba_tab_privs
where table_name='DBMS_SYS_SQL'
and privilege='EXECUTE'
and grantee='APPS';

  COUNT(1)
----------
         1

SQL> exec sys.ad_grants.cleanup;

PL/SQL procedure successfully completed.

SQL> select count(1)
from dba_tab_privs
where table_name='DBMS_SYS_SQL'
and privilege='EXECUTE'
and grantee='APPS';

  COUNT(1)
----------
         0

SQL>



Now the adop cleanup phase completes without any issues.





Fix for Oracle VBCS "Error 404--Not Found"

Andrejus Baranovski - Tue, 2019-07-16 01:13
We are using a Pay As You Go Oracle VBCS instance and had an issue accessing the VBCS home page after starting the service. The service was started successfully, without errors, but when accessing the VBCS home page URL, "Error 404--Not Found" was returned.

I raised a support ticket and must say I received a response and help promptly. If you encounter a similar issue yourself, hopefully this post will shed some light.

Apparently "Error 404--Not Found" was returned because the VBCS instance wasn't initialized during instance start. It wasn't initialized because of expired passwords for the VBCS DB schemas. Yes, you read it right - internal system passwords expire in the Cloud too.

Based on the instructions given by Oracle Support, I was able to extract logs from the VBCS WebLogic instance (by connecting through SSH to the VBCS cloud machine) and provide them to the support team (yes, VBCS runs on WebLogic). They found password expiry errors in the log, similar to this:

weblogic.application.ModuleException: java.sql.SQLException: ORA-28001: the password has expired

Based on the provided instructions, I extracted the VBCS DB schema name and connected through SQL Developer. Then I executed the SQL statement given by the support team to reset all VBCS DB passwords in bulk (a generic sketch of what such a check and reset could look like is shown below). The next password expiry is set for January 2019. Should it expire at all?
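
For illustration only - this is a generic sketch, not the exact statement provided by Oracle Support; the schema name and password are hypothetical, and passwords of managed cloud schemas should only be reset as instructed by support:

-- list accounts whose passwords have expired or are about to expire
select username, account_status, expiry_date
  from dba_users
 where account_status like '%EXPIRED%';

-- reset one schema password and unlock the account (schema name is hypothetical)
alter user VBCS_SCHEMA identified by "NewStrongPassword#1" account unlock;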

Summary: if you encounter "Error 404--Not Found" after starting a VBCS instance and trying to access the VBCS home page, then most likely (but not always) it will be related to the VBCS DB schema password expiry issue.

Delete MGMTDB and MGMTLSNR from OEM using emcli

Michael Dinh - Mon, 2019-07-15 18:22

Per Doc ID 1933649.1, MGMTDB and MGMTLSNR should not be monitored.

$ grep oms /etc/oratab 
oms:/u01/middleware/13.2.0:N

$ . oraenv <<< oms

$ emcli login -username=SYSMAN
Enter password : 
Login successful

$ emcli sync
Synchronized successfully

$ emcli get_targets -targets=oracle_listener -format=name:csv|grep -i MGMT
1,Up,oracle_listener,MGMTLSNR_host01

$ emcli delete_target -name="MGMTLSNR_host01" -type="oracle_listener" 
Target "MGMTLSNR_host01:oracle_listener" deleted successfully

$ emcli sync
$ emcli get_targets|grep -i MGMT

Note: MGMTDB was not monitored and can be deleted as follows:

$ emcli get_targets -targets=oracle_database -format=name:csv|grep -i MGMT
$ emcli delete_target -name="MGMTDB_host01" -type="oracle_database" 

The problem with monitoring MGMTDB and MGMTLSNR is getting a spurious page when they are relocated to a new host.

Host=host01
Target type=Listener 
Target name=MGMTLSNR_host01
Categories=Availability 
Message=The listener is down:

We are dealing with the same issue for the SCAN listeners and have not reached an agreement to have them deleted, although I and a few others think they should not be monitored.
Unfortunately, there is no official Oracle documentation for this.

Here’s a typical page for when all SCAN listeners are running from only one node.

Host=host01
Target type=Listener
Target name=LISTENER_SCAN2_cluster
Categories=Availability
Message=The listener is down: 

$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node02
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node node02
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node node02

Email Spoofing

Yann Neuhaus - Mon, 2019-07-15 11:07

Have you ever had that unhealthy sensation of being accused of things that have nothing to do with you? Of feeling helpless in the face of an accusing mail which, because of its imperative and accusing tone, manages to cast opprobrium on you?

That is the purpose of this particular kind of sextortion mail, which uses spoofing to try to extort money from you: a message from a supposed “hacker” who claims to have hacked into your computer. He threatens to publish compromising images taken with your webcam without your knowledge and, most of the time, asks you for a ransom in virtual currency.

Something like this:

 

Date:  Friday, 24 May 2019 at 09:19 UTC+1
Subject: oneperson
Your account is hacked! Renew the pswd immediately!
You do not heard about me and you are definitely wondering why you’re receiving this particular electronic message, proper?
I’m ahacker who exploitedyour emailand digital devicesnot so long ago.
Do not waste your time and make an attempt to communicate with me or find me, it’s not possible, because I directed you a letter from YOUR own account that I’ve hacked.
I have started malware to the adult vids (porn) site and suppose that you watched this website to enjoy it (you understand what I mean).
Whilst you have been keeping an eye on films, your browser started out functioning like a RDP (Remote Control) that have a keylogger that gave me authority to access your desktop and camera.
Then, my softaquiredall data.
You have entered passcodes on the online resources you visited, I intercepted all of them.
Of course, you could possibly modify them, or perhaps already modified them.
But it really doesn’t matter, my app updates needed data regularly.
And what did I do?
I generated a reserve copy of every your system. Of all files and personal contacts.
I have managed to create dual-screen record. The 1 screen displays the clip that you were watching (you have a good taste, ha-ha…), and the second part reveals the recording from your own webcam.
What exactly must you do?
So, in my view, 1000 USD will be a reasonable amount of money for this little riddle. You will make the payment by bitcoins (if you don’t understand this, search “how to purchase bitcoin” in Google).
My bitcoin wallet address:
1816WoXDtSmAM9a4e3HhebDXP7DLkuaYAd
(It is cAsE sensitive, so copy and paste it).
Warning:
You will have 2 days to perform the payment. (I built in an exclusive pixel in this message, and at this time I understand that you’ve read through this email).
To monitorthe reading of a letterand the actionsin it, I utilizea Facebook pixel. Thanks to them. (Everything thatis usedfor the authorities may helpus.)

In the event I do not get bitcoins, I shall undoubtedly give your video to each of your contacts, along with family members, colleagues, etc?

 

Users who are victims of these scams receive a message from a stranger who presents himself as a hacker. This alleged “hacker” claims to have taken control of his victim’s computer after the victim visited a pornographic site (or any other site that morality would condemn). The cybercriminal then claims to have compromising videos of the victim made with his or her webcam. He threatens to publish them to the victim’s personal or even professional contacts if the victim does not pay him a ransom. This ransom, which ranges from a few hundred to several thousand dollars, is demanded in a virtual currency (usually Bitcoin, but not always).

To scare the victim even more, cybercriminals sometimes go so far as to write to the victim with his or her own email address, in order to make him or her believe that they have actually taken control of his or her account. 

First of all, there is no need to be afraid. Even if the “hack” announced by the cybercriminals is not impossible to achieve in theory, in practice it remains technically complex and, above all, time-consuming to carry out. Since scammers target their victims by the thousands, it is safe to conclude that they would not have the time to do what they claim to have done.

These messages are just an attempt at a scam. In other words, if you receive such a blackmail message and do not pay, nothing more will obviously happen. 

There is also no need to change your email credentials. Your email address is usually already known and circulating on the Internet, because you use it regularly on different sites to identify yourself and to communicate. Some of these sites have resold or exchanged their address files with more or less scrupulous marketing partners.

If cybercriminals have written to you from your own email address to make you believe that they have taken control of it: be aware that the sender’s address in a message is just a displayed value that can very easily be spoofed without much technical skill.

In any case, the way to go is simple: don’t panic, don’t answer, don’t pay, just throw this mail in the trash (and don’t forget to empty it regularly). 

On the mail server side, setting up certain elements can help to prevent this kind of mail from spreading in the organization. This involves deploying the following measures on your mail server:

  •       SPF (Sender Policy Framework): This is a standard for verifying the domain name of the sender of an email (standardized in RFC 7208 [1]). Adopting this standard is likely to reduce spam. It complements SMTP (Simple Mail Transfer Protocol), which by itself does not provide a sender verification mechanism. SPF aims to reduce the possibility of spoofing by publishing a record in the DNS (Domain Name System) indicating which IP addresses are allowed or forbidden to send mail for the domain in question.
  •       DKIM (DomainKeys Identified Mail): This is a reliable authentication standard for the domain name of the sender of an email that provides effective protection against spam and phishing (standardized in RFC 6376 [2]). DKIM works through a cryptographic signature, verifies the authenticity of the sending domain and also guarantees the integrity of the message.
  •       DMARC (Domain-based Message Authentication, Reporting and Conformance): This is a technical specification to help reduce email misuse by providing a solution for deploying and monitoring authentication issues (standardized in RFC 7489 [3]). DMARC standardizes how recipients perform email authentication using the SPF and DKIM mechanisms. Illustrative DNS records for all three mechanisms are sketched below.
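
For illustration only - the domain, selector, IP address and policy values below are placeholders and the DKIM public key is truncated - the corresponding DNS TXT records could look like this:

example.com.                       IN TXT "v=spf1 mx ip4:192.0.2.10 -all"
selector1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEB...truncated..."
_dmarc.example.com.                IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"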

 

REFERENCES

[1] S. Kitterman, “Sender Policy Framework (SPF),” ser. RFC7208, 2014, https://tools.ietf.org/html/rfc7208

[2] D. Crocker, T. Hansen, M. Kucherawy, “DomainKeys Identified Mail (DKIM) Signatures” ser. RFC6376, 2011,  https://tools.ietf.org/html/rfc6376

[3] M. Kucherawy, E. Zwicky, “Domain-based Message Authentication, Reporting and Conformance (DMARC)”, ser. RFC7489, 2015, https://tools.ietf.org/html/rfc7489

The article Email Spoofing appeared first on Blog dbi services.

Forecast Model Tuning with Additional Regressors in Prophet

Andrejus Baranovski - Mon, 2019-07-15 04:17
I’m going to share my experiment results with Prophet additional regressors. My goal was to check how much weight an extra regressor would carry in the forecast calculated by Prophet.

I used the Bike Sharing in Washington D.C. dataset from Kaggle. The data comes with the number of bike rentals per day and the weather conditions. I created and compared three models:

1. Time series Prophet model with date and number of bike rentals
2. A model with one additional regressor: weather temperature
3. A model with additional regressors: weather temperature and weather state (raining, sunny, etc.)

We should see the effect of the regressors when comparing these three models; a minimal sketch of how an additional regressor is wired into Prophet follows below.
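
The sketch below is only an assumption of how such a model could be set up: the column names (dteday, cnt, temp) are taken from the public bike-sharing dataset description, the forecast horizon is arbitrary, and depending on the installed version the package is imported as fbprophet or prophet.

import pandas as pd
from fbprophet import Prophet   # newer releases: from prophet import Prophet

# Load daily data and rename columns to the ds/y names Prophet expects.
df = pd.read_csv('day.csv', parse_dates=['dteday'])
df = df.rename(columns={'dteday': 'ds', 'cnt': 'y'})

m = Prophet()
m.add_regressor('temp')              # register the extra regressor before fitting
m.fit(df[['ds', 'y', 'temp']])

# Future values of the regressor must be supplied for the forecast horizon;
# here missing future temperatures are naively filled with the historical mean.
future = m.make_future_dataframe(periods=30)
future = future.merge(df[['ds', 'temp']], on='ds', how='left')
future['temp'] = future['temp'].fillna(df['temp'].mean())

forecast = m.predict(future)
print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail())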

Read more in my Towards Data Science post.
