Feed aggregator

Economics and Innovations of Serverless

OTN TechBlog - Fri, 2019-04-19 13:08

The term serverless has been one of the biggest mindset changes since the term cloud, and learning how to “think serverless” should be part of every developer’s cloud-native journey. This is why one of Oracle’s 10 Predictions for Developers in 2019 is “The Economics of Serverless Drives Innovation on Multiple Fronts”. Let’s unpack what we mean by economics and innovation while covering a few common misconceptions.

The Economics

Cost is only part of the story

I often hear “cost reduction” as a key driver of serverless architectures. Everyone wants to save money and be a hero for their organization. Why pay for a full-time server when you can pay per function millisecond? The ultimate panacea of utility computing — pay for exactly what you need and no more. This is only part of the story.

Economics is a broad term for the production, distribution, and consumption of things. Serverless is about producing software. And software is about using computers as leverage to produce non-linear value. Facebook (really MySpace) leveraged software to change the way the world connected. Uber leveraged software to transform the transportation industry. Netflix leveraged software to change the way the world consumed movies. Software is transforming every major company in every major industry, and for most, is now at the heart of how they deliver value to end users. So why the fuss about serverless?

Serverless is About Driving Non-Linear Value

Because serverless is ultimately about driving non-linear business value which can fundamentally change the economics of your business. I’ve talked about this many times, but Ben nails it — “serverless is a ladder. You’re climbing to some nirvana where you get to deliver pure business value with no overhead.”

Pundits point out that “focus on business value” has been said many times over the years, and they’re right. But every software architecture cycle learns from past cycles and incorporates new ways to achieve this goal of greater focus, which is why serverless is such an important cycle to watch. It effectively incorporates the promise (and best) of cloud with the promise (and learnings) of SOA.

Ultimately the winning businesses reduce overhead while increasing value to their customers by empowering their developers. That’s why the economics are too compelling to ignore. Not because your CRON job server goes from $30 to $0.30/month (although a nice use case), but because creating a culture of innovation and focus on driving business value is a formula for success.

So we can’t ignore the economics. Let’s move to the innovations.

The Innovations

The tech industry is in constant motion. Apps, infrastructure, and the delivery process drive each other forward together in a ping-pong fashion. Here are a few of the key areas to watch that are contributing to forward movement in the innovation cycle, as illustrated in the “Digital Trialectic”:

Depth of Services

The web is fundamentally changing how we deliver services. We’re moving towards an “everything-as-a-service” world where important bits of functionality can be consumed by simply calling an API. Programming is changing, and this is driven largely by the depth of available services that solve problems which once consumed hours of developers’ time.

Twilio removes the need to build SMS, voice, and now email (via its SendGrid acquisition) code and infrastructure. Google’s Cloud Vision API removes the need for complex object and facial detection code and infrastructure. AWS’s Ground Station removes the need for satellite communications code and infrastructure (finally?), and Oracle’s Autonomous Database replaces your existing Oracle Database code and infrastructure.

Pizzas, weather, maps, automobile data, cats – you have an endless list of things accessible through simple API calls.

Open Source

As always, serverless innovation is happening in the world of open source as well, and many of these projects end up backing the services listed above. The Fn Project is fully open source code my team is working on that allows anyone to run their own serverless infrastructure on any cloud, starting with functions-as-a-service and moving towards things like workflow as well. Come say hi in our Slack.

But you can get to serverless faster with the managed Fn service, Oracle Functions. And there are other great industry efforts as well, including Knative by Google, OpenFaaS by Alex Ellis, and OpenWhisk by IBM.

All of these projects focus mostly on the compute aspect of a serverless architecture. Many other projects aim to make areas such as storage, networking, and security easier, and all will eventually have their own managed service counterparts to complete the picture. The options are a bit bewildering, which is where standards can help.

Standards

With a paradox of choice emerging in serverless, standards aim to ease the pain by providing common interfaces across projects, vendors, and services. The most active forum driving these standards is the Serverless Working Group, a subgroup of the Cloud Native Computing Foundation. Like cats and dogs living together, representatives from almost every major vendor and many notable startups and end users have been discussing how to “harmonize” the quickly-moving serverless space. CloudEvents has been the first major output from the group, and it’s a great one to watch. Join the group during the weekly meetings, or face-to-face at any of the upcoming KubeCons.

Expect workflow, function signatures, and other important aspects of serverless to come next. My hope is that the group can move quickly enough to keep up with the fast-moving space and have a material impact on the future of serverless architectures, further increasing the focus on business value for developers at companies of all sizes.

A Final Word

We’re all guilty of skipping to the end in long posts. So here’s the net net: serverless is the next cycle of software architecture, its roots and learnings coming from the best of SOA and cloud. Its aim is to change the way software is produced by allowing developers to focus on business value, which in turn drives non-linear business value. The industry is moving quickly, with innovation happening through the proliferation of services, open source, and ultimately standards that help harmonize it all.

Like anything, the best way to get started is to just start. Pick your favorite cloud, and start using functions. You can either install Fn manually or sign up for early access to Oracle Functions.
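If you go the Fn route, a minimal local quickstart looks something like this (a sketch assuming Docker is installed; the function name, runtime, and app name are arbitrary examples, so check the Fn Project README for the current commands):

# install the Fn CLI and start a local Fn server
curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install | sh
fn start &

# scaffold, deploy, and invoke a function
fn init --runtime node hello
cd hello
fn deploy --app demo --local
fn invoke demo hello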

If you don’t have an Oracle Cloud account, take a free trial today.

Oracle VM Server: Working with ovm cli

Dietrich Schroff - Fri, 2019-04-19 06:01
After getting the ovmcli running, here are some commands which are quite helpful when you are working with Oracle VM Server.
But first:
Starting the ovmcli is done via
ssh admin@localhost -p 10000
at the OVM Manager.

After that you can get some overviews:
OVM> list server
Command: list server
Status: Success
Time: 2019-01-25 06:56:55,065 EST
Data:
  id:18:e2:a6:9d:5c:b6:48:3a:9b:d2:b0:0f:56:7e:ab:e9  name:oraclevm
OVM> list vm
Command: list vm
Status: Success
Time: 2019-01-25 06:56:57,357 EST
Data:
  id:0004fb0000060000fa3b1b883e717582  name:myAlpineLinux
OVM> list ServerPool
Command: list ServerPool
Status: Success
Time: 2019-01-25 06:57:12,165 EST
Data:
  id:0004fb0000020000fca85278d951ce27  name:MyServerPool
A complete list of all list commands can be obtained like this:
OVM> list ?
          AccessGroup
          AntiAffinityGroup
          Assembly
          AssemblyVirtualDisk
          AssemblyVm
          BondPort
          ControlDomain
          Cpu
          CpuCompatibilityGroup
          FileServer
          FileServerPlugin
          FileSystem
          Job
          Manager
          Network
          PeriodicTask
          PhysicalDisk
          Port
          Repository
          RepositoryExport
          Server
          ServerController
          ServerPool
          ServerPoolNetworkPolicy
          ServerUpdateGroup
          ServerUpdateRepository
          StorageArray
          StorageArrayPlugin
          StorageInitiator
          Tag
          VirtualAppliance
          VirtualApplianceVirtualDisk
          VirtualApplianceVm
          VirtualCdrom
          VirtualDisk
          VlanInterface
          Vm
          VmCloneCustomizer
          VmCloneNetworkMapping
          VmCloneStorageMapping
          VmDiskMapping
          Vnic
          VolumeGroup
An overview of the kinds of commands that can be used (like list) comes from help:
OVM> help
For Most Object Types:
    create <type> [(attribute1)="value1"] ... [on <instance>]
    delete <instance>
    edit   <instance> (attribute1)="value1" ...
    list   <type>
    show   <instance>
For Most Object Types with Children:
    add    <instance> to <instance>
    remove <instance> from <instance>
Client Session Commands:
    set alphabetizeAttributes=[Yes|No]
    set commandMode=[Asynchronous|Synchronous]
    set commandTimeout=[1-43200]
    set endLineChars=[CRLF,CR,LF]
    set outputMode=[Verbose,XML,Sparse]
    showclisession
Other Commands:
    exit
    showallcustomcmds
    showcustomcmds
    showobjtypes
    showversion
If you want to get your vm.cfg file, you can use the id from "list vm" and type:
OVM> getVmCfgFileContent Vm id=0004fb0000060000fa3b1b883e717582
Command: getVmCfgFileContent Vm id=0004fb0000060000fa3b1b883e717582
Status: Success
Time: 2019-01-25 06:59:46,875 EST
Data:
  OVM_domain_type = xen_pvm
  bootargs =
  disk = [file:/OVS/Repositories/0004fb0000030000dad74d9c43176d2e/ISOs/0004fb0000150000226a713414eaa501.iso,xvda:cdrom,r,file:/OVS/Repositories/0004fb0000030000dad74d9c43176d2e/VirtualDisks/0004fb0000120000f62a7bba83063840.img,xvdb,w]
  bootloader = /usr/bin/pygrub
  vcpus = 1
  memory = 512
  on_poweroff = destroy
  OVM_os_type = Other Linux
  on_crash = restart
  cpu_weight = 27500
  OVM_description =
  cpu_cap = 0
  on_reboot = restart
  OVM_simple_name = myAlpineLinux
  name = 0004fb0000060000fa3b1b883e717582
  maxvcpus = 1
  vfb = [type=vnc,vncunused=1,vnclisten=127.0.0.1,keymap=en-us]
  uuid = 0004fb00-0006-0000-fa3b-1b883e717582
  guest_os_type = linux
  OVM_cpu_compat_group =
  OVM_high_availability = false
  vif = []
The Oracle documentation is very helpful, too.


Creating A Microservice With Micronaut, GORM And Oracle ATP

OTN TechBlog - Thu, 2019-04-18 12:56

Over the past year, the Micronaut framework has become extremely popular. And for good reason, too. It's a pretty revolutionary framework for the JVM world that uses compile-time dependency injection and AOP without any reflection. That means huge gains in startup time, runtime performance, and memory consumption. But it's not enough to just be performant; a framework has to be easy to use and well documented. The good news is, Micronaut is both. And it's fun to use and works great with Groovy, Kotlin and GraalVM. In addition, the people behind Micronaut understand the direction that the industry is heading and have built the framework with that direction in mind. This means that things like serverless and cloud deployments are easy, and there are features that provide direct support for them.

In this post we'll look at how to create a microservice with Micronaut that exposes a "Person" API. The service will utilize GORM, a "data access toolkit" - a fancy way of saying it's a really easy way to work with databases (from traditional RDBMS to MongoDB, Neo4j and more). Specifically, we'll utilize GORM for Hibernate to interact with an Oracle Autonomous Transaction Processing DB. Here's what we'll be doing:

  1. Create the Micronaut application with Groovy support
  2. Configure the application to use GORM connected to an ATP database.
  3. Create a Person model
  4. Create a Person service to perform CRUD operations on the Person model
  5. Create a controller to interact with the Person service

First things first, make sure you have an Oracle ATP instance up and running. Luckily, that's really easy to do and this post by my boss Gerald Venzl will show you how to set up an ATP instance in less than 5 minutes. Once you have a running instance, grab a copy of your Client Credentials "Wallet" and unzip it somewhere on your local system.

Before we move on to the next step, create a new schema in your ATP instance and create a single table using the following DDL:
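A minimal sketch of that table, assuming the columns implied by the model later in the post (an ID, the GORM-managed version column, and the three Person properties):

-- hypothetical reconstruction; adjust names and types to your conventions
CREATE TABLE person (
    id         NUMBER(19) PRIMARY KEY,
    version    NUMBER(19) NOT NULL,
    first_name VARCHAR2(50) NOT NULL,
    last_name  VARCHAR2(50) NOT NULL,
    is_cool    NUMBER(1)
);

-- Hibernate's default Oracle id generator expects a sequence like this
CREATE SEQUENCE hibernate_sequence;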

You're now ready to move on to the next step, creating the Micronaut application.

Create The Micronaut Application

If you've never used it before, you'll need to install Micronaut, which includes a helpful CLI for scaffolding certain elements like the application itself and controllers as you work with your application. Once you've confirmed the install, run the following command to generate your basic application:
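With the CLI on your path, the command looks something like this (the package and application name here are arbitrary examples):

mn create-app example.person --lang groovy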

Take a look inside that directory to see what the CLI has generated for you. 

As you can see, the CLI has generated a Gradle build script, a Dockerfile and some other config files as well as a `src` directory. That directory looks like this:
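Roughly, it contains the standard Micronaut layout (a sketch; exact contents depend on the CLI version):

src/
  main/
    groovy/
      example/
        Application.groovy
    resources/
      application.yml
      logback.xml
  test/
    groovy/
      example/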

At this point you can import the application into your favorite IDE, so do that now. The next step is to generate a controller:
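From inside the project directory, something like:

cd person
mn create-controller Person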

We'll make one small adjustment to the generated controller, so open it up and add the `@CompileStatic` annotation to the controller. It should look like this once you're done:
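A sketch of the result, assuming the example package name from above:

package example

import groovy.transform.CompileStatic
import io.micronaut.http.HttpStatus
import io.micronaut.http.annotation.Controller
import io.micronaut.http.annotation.Get

@CompileStatic
@Controller("/person")
class PersonController {

    @Get("/")
    HttpStatus index() {
        HttpStatus.OK
    }
}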

Now run the application using `gradle run` (we can also use the Gradle wrapper with `./gradlew run`) and our application will start up and be available via the browser or a simple curl command to confirm that it's working.  You'll see the following in your console once the app is ready to go:

Give it a shot:
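For example:

curl -i http://localhost:8080/person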

We aren't returning any content, but we can see the '200 OK' which means the application received the request and returned the appropriate response.

To make things easier for development and testing the app locally I like to create a custom Run/Debug configuration in my IDE (IntelliJ IDEA) and point it at a custom Gradle task. We'll need to pass in some System properties eventually, and this enables us to do that when launching from the IDE. Create a new task in `build.gradle` named `myTask` that looks like so:
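A minimal sketch of such a task; it behaves like `run` but forwards the -D system properties set in the Run/Debug configuration to the application JVM:

// build.gradle (excerpt)
task myTask(type: JavaExec) {
    classpath = sourceSets.main.runtimeClasspath
    main = mainClassName               // set by the 'application' plugin
    systemProperties System.properties // forward -D properties from the launcher
}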

Now create a custom Run/Debug configuration that points at this task and add the VM options that we'll need later on for the Oracle DB connection:

Here are the properties we'll need to populate for easier copy/pasting:
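The values below are placeholders; the keys assume the GORM-style dataSource configuration used in application.yml later on, and the URL syntax assumes an 18.3+ JDBC driver that understands TNS_ADMIN in the connect string:

-DdataSource.url=jdbc:oracle:thin:@yourdb_high?TNS_ADMIN=/path/to/wallet
-DdataSource.username=YOUR_SCHEMA
-DdataSource.password=YourPassword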

Let's move to the next step and get the application ready to talk to ATP!

Configure The Application For GORM and ATP

Before we can configure the application we need to make sure we have the Oracle JDBC drivers available. Download them, create a directory called `libs` in the root of your application and place them there.  Make sure that you have the following JARs in the `libs` directory:
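For wallet-based ATP connections the JDBC download typically includes these (verify against the bundle you download):

libs/ojdbc8.jar
libs/ucp.jar
libs/oraclepki.jar
libs/osdt_core.jar
libs/osdt_cert.jar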

Modify the `dependencies` block in your `build.gradle` file so that the Oracle JDBC JARs and the `micronaut-hibernate-gorm` artifacts are included as dependencies:
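A sketch of the additions (artifact coordinates as of Micronaut 1.x):

// build.gradle (excerpt): GORM for Hibernate plus the local Oracle JDBC JARs
dependencies {
    compile "io.micronaut.configuration:micronaut-hibernate-gorm"
    compile fileTree(dir: "libs", include: "*.jar")
    // ...the dependencies generated by the CLI stay as they are...
}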

Now let's modify the file located at `src/main/resources/application.yml` to configure the datasource and Hibernate.  
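A sketch of that configuration, assuming GORM's Grails-style dataSource block with values resolved from the system properties defined earlier:

# application.yml (excerpt)
dataSource:
  url: ${dataSource.url}
  username: ${dataSource.username}
  password: ${dataSource.password}
  driverClassName: oracle.jdbc.OracleDriver
  pooled: true
  dbCreate: none        # the person table was created manually above
hibernate:
  show_sql: true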

Our app is now ready to talk to ATP via GORM, so it's time to create a service, model and some controller methods! We'll start with the model.

Creating A Model

GORM models are super easy to work with. They're just POGOs (Plain Old Groovy Objects) with some special annotations that help identify them as model entities and provide validation via the Bean Validation API. Let's create our `Person` model object by adding a Groovy class called 'Person.groovy' in a new directory called `model`. Populate the model as such:
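A sketch of the model, reconstructed from the description below; the oddly strict firstName size constraint is the one referenced later in the validation example, and the package name is assumed:

package example.model

import grails.gorm.annotation.Entity
import javax.validation.constraints.NotNull
import javax.validation.constraints.Size

@Entity
class Person {

    @NotNull
    @Size(min = 5, max = 50)   // deliberately strange, to demonstrate validation
    String firstName

    @NotNull
    String lastName

    Boolean isCool = false
}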

Take note of a few items here. We've annotated the class with @Entity (`grails.gorm.annotation.Entity`) so GORM knows that this is an entity it needs to manage. Our model has 3 properties: firstName, lastName and isCool. If you look back at the DDL we used to create the `person` table above you'll notice that we have two additional columns that aren't addressed in the model: ID and version. The ID column is implicit with a GORM entity and the version column is auto-managed by GORM to handle optimistic locking on entities. You'll also notice a few annotations on the properties which are used for data validation as we'll see later on.

We can start the application up again at this point and we'll see that GORM has identified our entity and Micronaut has configured the application for Hibernate:

Let's move on to creating a service.

Creating A Service

I'm not going to lie to you. If you're waiting for things to get difficult here, you're going to be disappointed. Creating the service that we're going to use to manage `Person` CRUD operations is really easy to do. Create a Groovy class called `PersonService` in a new directory called `service` and populate it with the following:
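A sketch of such a data service; GORM derives the implementations from signatures like these, though the exact signatures in the original may differ:

package example.service

import example.model.Person
import grails.gorm.services.Service

@Service(Person)
abstract class PersonService {

    // entity-accepting save; GORM can also implement saves that take properties
    abstract Person save(Person person)

    abstract Person get(Serializable id)

    // two findAll signatures: everything, or paginated via args like [max: 2, offset: 2]
    abstract List<Person> findAll()
    abstract List<Person> findAll(Map args)

    abstract void delete(Serializable id)
}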

That's literally all it takes. This service is now ready to handle operations from our controller. GORM is smart enough to take the method signatures that we've provided here and implement the methods. The nice thing about using an abstract class approach (as opposed to using the interface approach) is that we can manually implement the methods ourselves if we have additional business logic that requires us to do so.

There's no need to restart the application here, as we've made no changes that would be visible at this point. We're going to need to modify our controller for that, so let's create one!

Creating A Controller

Let's modify the `PersonController` that we created earlier to give us some endpoints that we can use to do some persistence operations. First, we'll need to inject our PersonService into the controller.  This too is straightforward by simply including the following just inside our class declaration:
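For example, via constructor injection (a field annotated with javax.inject.Inject works too):

// just inside the PersonController class declaration
protected final PersonService personService

PersonController(PersonService personService) {
    this.personService = personService
}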

The first step in our controller should be a method to save a `Person`.  Let's add a method annotated with `@Post` to handle this and within the method we'll call the `PersonService.save()` method.  If things go well, we'll return the newly created `Person`, if not we'll return a list of validation errors. Note that Micronaut will bind the body of the HTTP request to the `person` argument of the controller method meaning that inside the method we'll have a fully populated `Person` bean to work with.
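A sketch of such a method; the error-handling details are assumed, and the extra imports needed are io.micronaut.http.HttpResponse, io.micronaut.http.HttpStatus, io.micronaut.http.annotation.Post and io.micronaut.http.annotation.Body:

@Post("/save")
HttpResponse save(@Body Person person) {
    if (!person.validate()) {
        // 422 plus the list of validation errors
        return HttpResponse.status(HttpStatus.UNPROCESSABLE_ENTITY)
                           .body(person.errors.allErrors)
    }
    HttpResponse.ok(personService.save(person))
}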

If we start up the application we are now able to persist a `Person` via the `/person/save` endpoint:
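For example, with a sample payload:

curl -i -X POST -H "Content-Type: application/json" \
     -d '{"firstName":"Homer","lastName":"Simpson","isCool":true}' \
     http://localhost:8080/person/save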

Note that we've received a 200 OK response here with an object containing our `Person`.  However, if we tried the operation with some invalid data, we'd receive some errors back:

Since our model (very strangely) indicated that the `Person` firstName must be between 5 and 50 characters we receive a 422 Unprocessable Entity response that contains an array of validation errors back with this response.

Now we'll add a `/list` endpoint that users can hit to list all of the Person objects stored in the ATP instance. We'll set it up with two optional parameters that can be used for pagination.
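A sketch, using javax.annotation.Nullable to mark the query parameters optional:

@Get("/list")
List<Person> list(@Nullable Integer max, @Nullable Integer offset) {
    Map<String, Object> args = [:]
    if (max != null) args.max = max
    if (offset != null) args.offset = offset
    args ? personService.findAll(args) : personService.findAll()
}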

Remember that our `PersonService` had two signatures for the `findAll` method - one that accepted no parameters and another that accepted a `Map`.  The Map signature can be used to pass additional parameters like those used for pagination.  So calling `/person/list` without any parameters will give us all `Person` objects:

Or we can get a subset via the pagination params like so:
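For example:

curl -i "http://localhost:8080/person/list?max=2&offset=2"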

We can also add a `/person/get` endpoint to get a `Person` by ID:
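A sketch:

@Get("/get/{id}")
Person get(Long id) {
    personService.get(id)
}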

And a `/person/delete` endpoint to delete a `Person`:
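And one for delete (the extra import is io.micronaut.http.annotation.Delete):

@Delete("/delete/{id}")
HttpResponse delete(Long id) {
    personService.delete(id)
    HttpResponse.noContent()
}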

Summary

We've seen here that Micronaut is a simple but powerful way to create performant Microservice applications and that data persistence via Hibernate/GORM is easy to accomplish when using an Oracle ATP backend.  Your feedback is very important to me so please feel free to comment below or interact with me on Twitter (@recursivecodes).

If you'd like to take a look at this entire application you can view it or clone it via GitHub.

Oracle ACEs at APEX Connect 2019, May 7-9 in Bonn

OTN TechBlog - Thu, 2019-04-18 11:36

APEX Connect 2019, the annual conference organized by DOAG (the German Oracle user group), will be held May 7-9, 2019 in Bonn, Germany. The event features a wide selection of sessions and events covering APEX, PL/SQL, and JavaScript. Among the session speakers are the following members of the Oracle ACE Program:

Oracle ACE Director Niels de Bruijn
Business Unit Manager APEX, MT AG
Cologne, Germany

Oracle ACE Director Roel Hartman
Director/Senior APEX Developer, APEX Consulting
Apeldoorn, Netherlands

Oracle ACE Director Heli Helskyaho
CEO, Miracle Finland Oy
Finland

Oracle ACE Director John Edward Scott
Founder, APEX Evangelists
West Yorkshire, United Kingdom

Oracle ACE Director Kamil Stawiarski
Owner/Partner, ORA-600
Warsaw, Poland

Oracle ACE Director Martin Widlake
Database Architect and Performance Specialist, ORA600
Essex, United Kingdom

Oracle ACE Alan Arentsen
Senior Oracle Developer, Arentsen Database Consultancy
Breda, Netherlands

Oracle ACE Tobias Arnhold
Freelance APEX Developer, Tobias Arnhold IT Consulting
Germany

Oracle ACE Dietmar Aust
Owner, OPAL UG
Cologne, Germany

Oracle ACE Kai Donato
Senior Consultant for Oracle APEX Development, MT AG
Cologne, Germany

Oracle ACE Daniel Hochleitner
Freelance Oracle APEX Developer and Consultant
Regensburg, Germany

Oracle ACE Oliver Lemm
Business Unit Manager, MT AG
Cologne, Germany

Oracle ACE Richard Martens
Co-Owner, SMART4Solutions B.V.
Tilburg, Netherlands

Oracle ACE Robert Marz
Principal Technical Architect, its-people GmbH
Frankfurt, Germany

Oracle ACE Matt Mulvaney
Senior Development Consultant, Explorer UK LTD
Leeds, United Kingdom

Oracle ACE Christian Rokitta
Managing Partner, iAdvise
Breda, Netherlands

Oracle ACE Philipp Salvisberg
Senior Principal Consultant, Trivadis AG
Zürich, Switzerland

Oracle ACE Sven-Uwe Weller
Syntegris Information Solutions GmbH
Germany

Oracle ACE Associate Carolin Hagemann
Hagemann IT Consulting
Hamburg, Germany

Oracle ACE Associate Moritz Klein
Senior APEX Consultant, MT AG
Frankfurt, Germany


Migrating Oracle Database & Non Oracle Database to Oracle Cloud

You can directly move or migrate various source databases into different target cloud deployments running on Oracle Cloud. Oracle's automated migration tools will move an on-premises database to the...

We share our skills to maximize your revenue!
Categories: DBA Blogs

CubeViewer - Process to Build the Cube Viewer

Anthony Shorten - Wed, 2019-04-17 18:32

As pointed out in the last post, the Cube Viewer is a new way of displaying data for advanced analysis. The Cube Viewer functionality extends the existing ConfigTools (a.k.a. Task Optimization) objects to allow the analysis to be defined as a Cube Type and Cube View. Those definitions are used by the widget to display correctly and to define what level of interactivity the user can enjoy.

Note: Cube Viewer is available in Oracle Utilities Application Framework V4.3.0.6.0 and above.

The process of building a cube introduces new concepts and new objects to ConfigTools to allow for an efficient method of defining the analysis and interactivity. In summary form the process is described by the figure below:

Cube View Process

  • Design Your Cube. Decide the data and related information to be used in the Cube Viewer for analysis. This is not just a typical list of values but a design of dimensions, filters and values. This is an important step, as it helps determine whether the Cube Viewer is appropriate for the data to be analyzed.
  • Design Cube SQL. Translate the design into cube-based SQL. This SQL statement is formatted specifically for use in a cube.
  • Setup Query Zone. The SQL statement designed in the last step needs to be defined in a ConfigTools Query Zone for use in the Cube Type later in the process. This also allows for the configuration of additional information not contained in the SQL to be added to the Cube.
  • Setup Business Service. The Cube Viewer requires a Business Service based upon the standard FWLZDEXP application service. This is also used by the Cube Type later in the process.
  • Setup Cube Type. Define a Cube Type object specifying the Query Zone, Business Service and other settings to be used by the Cube Viewer at runtime. This brings all the configuration together into a new ConfigTools object.
  • Setup Cube View. Define an instance of the Cube Type with the relevant predefined settings for use in the user interface as a Cube View object. Appropriate users can use this as the initial view into the cube and use it as a basis for any Saved Views they want to implement.

Over the next few weeks, a number of articles will be available to outline each of these steps to help you understand the feature and be on your way to building your own cubes.

Oracle Database 19c download

Dietrich Schroff - Wed, 2019-04-17 15:20
In January 2019 Oracle released the documentation for Oracle Database 19c.

More than 7 weeks later, there is still nothing at https://www.oracle.com/downloads/.


For 18c, the gap between the release date of the documentation and the on-premises software was not as long...

Will the 19c on-premises software be released before May? Or later in summer?

ORA-22835: Buffer too small for CLOB to CHAR or BLOB to RAW conversion (actual: 6843, maximum: 2000)

Tom Kyte - Wed, 2019-04-17 13:26
Hi, I am using the query below to read XML from a BLOB and find a string, but I am facing the error ORA-22835: buffer too small for BLOB to RAW conversion (actual: 15569, maximum: 2000). Please help me out with the example below: SELECT XMLTYPE (UTL_RAW.cast_to_varchar...
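A common workaround for this error is to construct the XMLTYPE directly from the BLOB rather than casting through UTL_RAW.cast_to_varchar2, which in a SQL context is limited to 2000 bytes. A sketch with placeholder table and column names:

-- let XMLTYPE decode the BLOB itself, avoiding the RAW-to-VARCHAR2 limit
SELECT XMLTYPE(t.xml_blob, NLS_CHARSET_ID('AL32UTF8'))
FROM   my_table t;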
Categories: DBA Blogs

Merge two rows into one row

Tom Kyte - Wed, 2019-04-17 13:26
Hi Tom, I seek your help on how to compare two rows in a table and, if they are the same, merge the rows. create table test(id number, start_date date, end_date date, col1 varchar2(10), col2 varchar2(10), col3 varchar2(10)); insert into t...
Categories: DBA Blogs

Example of coe_xfr_sql_profile force_match TRUE

Bobby Durrett's DBA Blog - Wed, 2019-04-17 10:57

Monday, I used the coe_xfr_sql_profile.sql script from Oracle Support’s SQLT scripts to resolve a performance issue. I had to set the parameter force_match to TRUE so that the SQL Profile I created would apply to all SQL statements with the same FORCE_MATCHING_SIGNATURE value.

I had just come off the on-call rotation at 8 am Monday, and around 4 pm that day a coworker came up to me with a performance problem. A PeopleSoft Financials job was running longer than it normally did. Since it had run for several hours, I got an AWR report of the last hour and looked at the SQL ordered by Elapsed Time section and found a number of similar INSERT statements with different SQL_IDs.

The inserts were the same except for certain constant values. So, I used my fmsstat2.sql script with ss.sql_id = '60dp9r760ja88' to get the FORCE_MATCHING_SIGNATURE value for these inserts. Here is the output:

FORCE_MATCHING_SIGNATURE SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------------------ ------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
     5442820596869317879 60dp9r760ja88         3334601 15-APR-19 05.00.34.061 PM                1         224414.511     224412.713         2.982                  0                      0                   .376             5785269                 40                   3707

Now that I had the FORCE_MATCHING_SIGNATURE value 5442820596869317879, I reran fmsstat2.sql with ss.FORCE_MATCHING_SIGNATURE = 5442820596869317879 instead of ss.sql_id = '60dp9r760ja88' and got all of the insert statements and their PLAN_HASH_VALUE values. I needed these to use coe_xfr_sql_profile.sql to generate a script to create a SQL Profile to force a better plan onto the insert statements. Here is the beginning of the output of the fmsstat2.sql script:

FORCE_MATCHING_SIGNATURE SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------------------ ------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
     5442820596869317879 0yzz90wgcybuk      1314604389 14-APR-19 01.00.44.945 PM                1            558.798        558.258             0                  0                      0                      0               23571                  0                    812
     5442820596869317879 5a86b68g7714k      1314604389 14-APR-19 01.00.44.945 PM                1            571.158        571.158             0                  0                      0                      0               23245                  0                    681
     5442820596869317879 9u1a335s936z9      1314604389 14-APR-19 01.00.44.945 PM                1            536.886        536.886             0                  0                      0                      0               21851                  0                      2
     5442820596869317879 a922w6t6nt6ry      1314604389 14-APR-19 01.00.44.945 PM                1            607.943        607.943             0                  0                      0                      0               25948                  0                   1914
     5442820596869317879 d5cca46bzhdk3      1314604389 14-APR-19 01.00.44.945 PM                1            606.268         598.11             0                  0                      0                      0               25848                  0                   1763
     5442820596869317879 gwv75p0fyf9ys      1314604389 14-APR-19 01.00.44.945 PM                1            598.806        598.393             0                  0                      0                      0               24981                  0                   1525
     5442820596869317879 0u2rzwd08859s         3334601 15-APR-19 09.00.53.913 AM                1          18534.037      18531.635             0                  0                      0                      0              713757                  0                     59
     5442820596869317879 1spgv2h2sb8n5         3334601 15-APR-19 09.00.53.913 AM                1          30627.533      30627.533          .546                  0                      0                      0             1022484                 27                    487
     5442820596869317879 252dsf173mvc4         3334601 15-APR-19 09.00.53.913 AM                1          47872.361      47869.859          .085                  0                      0                      0             1457614                  2                    476
     5442820596869317879 25bw3269yx938         3334601 15-APR-19 09.00.53.913 AM                1         107915.183     107912.459         1.114                  0                      0                      0             2996363                 26                   2442
     5442820596869317879 2ktg1dvz8rndw         3334601 15-APR-19 09.00.53.913 AM                1          62178.512      62178.512          .077                  0                      0                      0             1789536                  3                   1111
     5442820596869317879 4500kk2dtkadn         3334601 15-APR-19 09.00.53.913 AM                1         106586.665     106586.665         7.624                  0                      0                      0             2894719                 20                   1660
     5442820596869317879 4jmj30ym5rrum         3334601 15-APR-19 09.00.53.913 AM                1          17638.067      17638.067             0                  0                      0                      0              699273                  0                    102
     5442820596869317879 657tp4jd07qn2         3334601 15-APR-19 09.00.53.913 AM                1          118948.54      118890.57             0                  0                      0                      0             3257090                  0                   2515
     5442820596869317879 6gpwwnbmch1nq         3334601 15-APR-19 09.00.53.913 AM                0          48685.816      48685.816          .487                  0                      0                  1.111             1433923                 12                      0
     5442820596869317879 6k1q5byga902a         3334601 15-APR-19 09.00.53.913 AM                1            2144.59        2144.59             0                  0                      0                      0              307369                  0                      2

The first few lines show the good plan that these inserts ran on earlier runs. The good plan has PLAN_HASH_VALUE 1314604389 and runs in about 600 milliseconds. The bad plan has PLAN_HASH_VALUE 3334601 and runs in 100 or so seconds. I took a look at the plans before doing the SQL Profile but did not really dig into why the plans changed. It was 4:30 pm or so and I was trying to get out the door since I was not on call and wanted to get home at a normal time and leave the problems to the on-call DBA. Here is the good plan:

Plan hash value: 1314604389

------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                |                    |       |       |  3090 (100)|          |
|   1 |  HASH JOIN RIGHT SEMI           |                    |  2311 |  3511K|  3090   (1)| 00:00:13 |
|   2 |   VIEW                          | VW_SQ_1            |   967 | 44482 |  1652   (1)| 00:00:07 |
|   3 |    HASH JOIN                    |                    |   967 | 52218 |  1652   (1)| 00:00:07 |
|   4 |     TABLE ACCESS FULL           | PS_PST_VCHR_TAO4   |    90 |  1980 |    92   (3)| 00:00:01 |
|   5 |     NESTED LOOPS                |                    | 77352 |  2417K|  1557   (1)| 00:00:07 |
|   6 |      INDEX UNIQUE SCAN          | PS_BUS_UNIT_TBL_GL |     1 |     5 |     0   (0)|          |
|   7 |      TABLE ACCESS BY INDEX ROWID| PS_DIST_LINE_TMP4  | 77352 |  2039K|  1557   (1)| 00:00:07 |
|   8 |       INDEX RANGE SCAN          | PS_DIST_LINE_TMP4  | 77352 |       |   756   (1)| 00:00:04 |
|   9 |   TABLE ACCESS BY INDEX ROWID   | PS_VCHR_TEMP_LN4   | 99664 |   143M|  1434   (1)| 00:00:06 |
|  10 |    INDEX RANGE SCAN             | PSAVCHR_TEMP_LN4   | 99664 |       |   630   (1)| 00:00:03 |
------------------------------------------------------------------------------------------------------

Here is the bad plan:

Plan hash value: 3334601

---------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                   |                    |       |       |  1819 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID       | PS_VCHR_TEMP_LN4   |  2926 |  4314K|  1814   (1)| 00:00:08 |
|   2 |   NESTED LOOPS                     |                    |  2926 |  4446K|  1819   (1)| 00:00:08 |
|   3 |    VIEW                            | VW_SQ_1            |     1 |    46 |     4   (0)| 00:00:01 |
|   4 |     SORT UNIQUE                    |                    |     1 |    51 |            |          |
|   5 |      TABLE ACCESS BY INDEX ROWID   | PS_PST_VCHR_TAO4   |     1 |    23 |     1   (0)| 00:00:01 |
|   6 |       NESTED LOOPS                 |                    |     1 |    51 |     4   (0)| 00:00:01 |
|   7 |        NESTED LOOPS                |                    |     1 |    28 |     3   (0)| 00:00:01 |
|   8 |         INDEX UNIQUE SCAN          | PS_BUS_UNIT_TBL_GL |     1 |     5 |     0   (0)|          |
|   9 |         TABLE ACCESS BY INDEX ROWID| PS_DIST_LINE_TMP4  |     1 |    23 |     3   (0)| 00:00:01 |
|  10 |          INDEX RANGE SCAN          | PS_DIST_LINE_TMP4  |     1 |       |     2   (0)| 00:00:01 |
|  11 |        INDEX RANGE SCAN            | PS_PST_VCHR_TAO4   |     1 |       |     1   (0)| 00:00:01 |
|  12 |    INDEX RANGE SCAN                | PSAVCHR_TEMP_LN4   |   126K|       |  1010   (1)| 00:00:05 |
---------------------------------------------------------------------------------------------------------

Notice that in the bad plan the Rows column has 1 in it on many of the lines, but in the good plan it has larger numbers. Something about the statistics and the values in the where clause caused the optimizer to build the bad plan as if no rows would be accessed from these tables even though many rows would be accessed. So, it made a plan based on wrong information. But I had no time to dig further. I did ask my coworker if anything had changed about this job and nothing had.

So, I created a SQL Profile script by going to the utl subdirectory under sqlt, where it was installed on the database server. I generated the script by running coe_xfr_sql_profile with sql_id gwv75p0fyf9ys and plan hash value 1314604389. I then edited the generated script, coe_xfr_sql_profile_gwv75p0fyf9ys_1314604389.sql, changing the setting force_match=>FALSE to force_match=>TRUE, and ran it. The long-running job finished shortly thereafter, and no new incidents have occurred in subsequent runs.
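For reference, the sequence looks roughly like this, run as a DBA from the sqlt/utl directory (the sql_id and plan hash value come from the fmsstat2.sql output above):

cd sqlt/utl
sqlplus / as sysdba
SQL> @coe_xfr_sql_profile.sql gwv75p0fyf9ys 1314604389
-- edit the generated coe_xfr_sql_profile_gwv75p0fyf9ys_1314604389.sql,
-- changing force_match=>FALSE to force_match=>TRUE, then run it:
SQL> @coe_xfr_sql_profile_gwv75p0fyf9ys_1314604389.sql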

The only thing that confuses me is that when I run fmsstat2.sql now with ss.FORCE_MATCHING_SIGNATURE = 5442820596869317879 I do not see any runs with the good plan. Maybe future runs of the job have a different FORCE_MATCHING_SIGNATURE and the SQL Profile only helped the one job. If that is true, the future runs may have had the correct statistics and run the good plan on their own.

I wanted to post this to give an example of using force_match=>TRUE with coe_xfr_sql_profile. I had an earlier post about this subject, but I thought another example could not hurt. I also wanted to show how I use fmsstat2.sql to find multiple SQL statements by their FORCE_MATCHING_SIGNATURE value. I realize that SQL Profiles are a kind of band-aid rather than a solution to the real problem. But I got out of the door by 5 pm on Monday and did not get woken up in the middle of the night, so sometimes a quick fix is what you need.

Bobby

Categories: DBA Blogs

Developers Decide One Cloud Isn’t Enough

OTN TechBlog - Wed, 2019-04-17 08:00

Introduction

Developers have significantly greater choice today than even just a few years ago when considering where to build, test and host their services and applications, deciding which clouds to move existing on-premises workloads to, and which of the multitude of open source projects to leverage. So why, in this new era of empowered developers and expanding choice, have so many organizations pursued a single cloud strategy? Over recent years, the proliferation of new cloud native open source projects and of cloud service providers that have added capacity, functionality, tools, resources and services has resulted in better performance, different cost models, and more choice for developers and DevOps engineers, while increasing competition among providers. This is leading to a new era of cloud choice, where the new norm will be dominated by a multi-cloud and hybrid cloud model.

As new cloud native design and development technologies like Kubernetes, serverless computing, and the maturing discipline of microservices emerge, they help accelerate, simplify, and expand deployment and development options. Users have the ability to leverage new technologies with their existing designs and deployments, and the flexibility they afford expands users’ option to run on many different platforms. Given this rapidly changing cloud landscape, it is not surprising that hybrid cloud and multi cloud strategies are being adopted by an increasing number of companies today. 

For a deeper dive into Prediction #7 of the 10 Predictions for Developers in 2019 offered by Siddhartha Agarwal, “Developers Decide One Cloud Isn’t Enough”, we look at the growing trend for companies and developers to choose more than one cloud provider. We’ll examine a few of the factors they consider, the needs determined by a company’s place in the development cycle, business objectives, and level of risk tolerance, and predict how certain choices will trend in 2019 and beyond.

 

Different Strokes

We are in a heterogeneous IT world today. A plethora of choice and use cases, coupled with widely varying technical and business needs and approaches to solving them, give rise to different solutions. No two are exactly the same, but development projects today typically fall within the following scenarios.

A. Born in the cloud development – these suffer little to no constraint imposed by existing applications; it is highly efficient and cost-effective to begin design in the cloud. They naturally leverage containers and new open source development tools like serverless (https://fnproject.io/) or service mesh platforms (e.g., Istio). A decade ago, startup costs based on datacenter needs alone were a serious barrier to entry for budding tech companies – cloud computing has completely changed this.

B. On premises development moving to cloud – enterprises in this category have many more factors to consider. Java teams for example are rapidly adopting frameworks like Helidon and GraalVM to help them move to a microservice architecture and migrate applications to the cloud. But will greenfield development projects start only in cloud? Do they migrate legacy workloads to cloud? How do they balance existing investments with new opportunities? And what about the interface between on-premises and cloud?

C. Remaining mostly on premises but moving some services to cloud – options are expanding for those in this category. A hybrid cloud approach has been expanding, and we predict it will continue to expand over at least the next few years. The cloud native stacks available on premises now mirror the cloud native stacks in the cloud, enabling a new generation of hybrid cloud use cases. An integrated and supported cloud native framework that spans on-premises and cloud options delivers choice once again. And security, privacy and latency concerns will dictate some of their unique development project needs.

 

If It Ain’t Broke, Don’t Fix It?

IT investments are real. Inertia can be hard to overcome. Let’s look at the main reasons for not distributing workloads across multiple clouds.  

  • Economy of scale tops the list, as most cloud providers will offer discounts for customers who go all in; larger workloads on one cloud provide negotiating leverage.
  • Development staff familiarity with one chosen platform makes it easier to bring on and train new developers, shortening their ramp time to productivity.
  • Custom features or functionality unique to the main cloud provider may need to be removed or redesigned in moving to another platform. Even on supposedly open platforms, developers must be aware of the not-so-obvious features impacting portability.
  • Geographical location of datacenters for privacy and/or latency concerns in less well-served areas of the world may also inhibit choice, or force uncomfortable trade-offs.
  • Risk mitigation is another significant factor, as enterprises seek to balance conflicting business needs with associated risks. Lean development teams often need to choose between taking on new development work vs modernizing legacy applications, when resources are scarce.

Change is Gonna Do You Good

These are valid concerns, but as dev teams look more deeply into the robust services and offerings emerging today, the trend is to diversify.

The most frequently cited concern is that of vendor lock-in. This counter-argument to that of economy of scale says that the more difficult it is to move your workloads off of one provider, the less motivated that vendor is to help reduce your cost of operations. For SMBs (small to mid-sized businesses) without a ton of leverage in comparison to large enterprises, this can be significant. Ensuring portability of workloads is important. A comprehensive cloud native infrastructure is imperative here – one that includes container orchestration but also streaming, CI/CD, and observability and analysis (e.g., Prometheus and Grafana). Containers and Kubernetes deliver portability, provided your cloud vendor uses unmodified open source code. In this model, a developer can develop their web application on their laptop, push it into a CI/CD system on one cloud, and leverage another cloud for managed Kubernetes to run their container-based app. However, the minute you start using specific APIs from the underlying platform, moving to another platform is much more difficult. AWS Lambda is one of many examples.

Mergers, acquisitions, changing business plans or practices, or other unforeseen events may impact a business at a time when they are not equipped to deal with it. Having greater flexibility to move with changing circumstances, and not being rushed into decisions, is also important. Consider for example, the merger of an organization that uses an on-premises PaaS, such as OpenShift, merging with another organization that has leveraged the public cloud across IaaS, PaaS and SaaS. It’s important to choose interoperable technologies to anticipate these scenarios.

Availability is another reason cited by customers. A thoughtfully designed multi-cloud architecture not only offers potential negotiating power as mentioned above, but also allows for failover in case of outages, DDoS attacks, local catastrophes, and the like. Larger cloud providers with massive resources and proliferation of datacenters and multiple availability domains offer a clear advantage here, but it also behooves the consumer to distribute risk across not only datacenters, but over several providers.

Another important set of factors is related to cost and ROI. Running the same workload on multiple cloud providers to compare cost and performance can help achieve business goals, and also help inform design practices.  

Adopting open source technologies enables businesses to choose where to run their applications based on the criteria they deem most important, be they technical, cost, business, compliance, or regulatory concerns. Moving to open source thus opens up the possibility to run applications on any cloud. That is, any CNCF-certified Kubernetes managed cloud service can safely run Kubernetes – so enterprises can take advantage of this key benefit to drive a multi-cloud strategy.

The trend in 2019 is moving strongly in the direction of design practices that support all aspects of a business’s goals, with the best offers, pricing and practices from multiple providers. This direction makes enterprises more competitive – maximally productive, cost-effective, secure, available, and flexible regarding platform choice.

 

Design for Flexibility

Though having a multi-cloud strategy seems to be the growing trend, it does come with some inherent challenges. To address issues like interoperability among multiple providers and establishing depth of expertise with a single cloud provider, we’re seeing an increased use of different technologies that help to abstract away some of the infrastructure interoperability hiccups. This is particularly important to developers, who seek the best available technologies that fit their specific needs.

Serverless computing seeks to reduce the awareness of any notion of infrastructure. Consider it similar to water or electricity utilities – once you have attached your own minimal home infrastructure to the endpoint offered by the utility, you simply turn on the tap or light switch, and pay for what you consume. The service scales automatically – for all intents and purposes, you may consume as much output of the utility or service as desired, and the bill goes up and down accordingly. When you are not consuming the service, there is no (or almost no) overhead.  

Development teams are picking cloud vendors based on capabilities they need. This is especially true in SaaS. SaaS is a cloud-based software delivery model with payment based on usage, rather than license or support-based pricing. The SaaS provider develops, maintains and updates the software, along with the hardware, middleware, application software, and security. SaaS customers can more easily predict total cost of ownership with greater accuracy. The more modern, complete SaaS solutions also allow for greater ease of configuration and personalization, and offer embedded analytics, data portability, cloud security, support for emerging technologies, and connected, end-to-end business processes.

Serverless computing not only provides simplicity through abstraction of infrastructure, its design patterns also promote the use of third-party managed services whenever possible. This provides flexibility and allows you to choose the best solution for your problem from the growing suite of products and services available in the cloud, from software-defined networking and API gateways, to databases and managed streaming services. In this design paradigm, everything within an application that is not purely business logic can be efficiently outsourced.

More and more companies are finding it increasingly easy to connect elements together with Serverless functionality for the desired business logic and design goals. Serverless deployments talking to multiple endpoints can run almost anywhere; serverless becomes the “glue” that is used to make use of the best services available, from any provider.

Serverless deployments can be run anywhere, even on multiple cloud platforms. Hence flexibility of choice expands even further, making it arguably the best design option for those desiring portability and openness.

 

Summary

There are many pieces required to deliver a successful multi-cloud approach. Modern developers use specific criteria to validate if a particular cloud is “open” and whether or not it supports a multi-cloud approach. Does it have the ability to

  • extract/export data without incurring significant expense or overhead?
  • be deployed either on-premises or in the public cloud, including for custom applications, integrations between applications, etc.?
  • monitor and manage applications that might reside on-premises or in other clouds from a single console, with the ability to aggregate monitoring/management data?

And does it have a good set of APIs that enable access to everything in the UI? Does it expose all the business logic and data required by the application? Does it have SSO capability across applications?

The CNCF (Cloud Native Computing Foundation) has over 400 cloud provider, user, and supporter members, and its working groups and cloud events specification engage these and thousands more in the ongoing mission to make cloud native computing ubiquitous, and allow engineers to make high-impact changes frequently and predictably with minimal toil.

We predict this trend will continue well beyond 2019 as CNCF drives adoption of this paradigm by fostering and sustaining an ecosystem of open source, vendor-neutral projects, and democratizing state-of-the-art patterns to make these innovations accessible for everyone.

Oracle is a platinum member of CNCF, along with 17 other major cloud providers. We are serious about our commitment to open source, open development practices, and sharing our expertise via technical tutorials, talks at meetups and conferences, and helping businesses succeed. Learn more and engage with us at cloudnative.oracle.com, and we’d love to hear if you agree with the predictions expressed in this post. 

Leading Pharmacy Extends 100 Year Legacy with Oracle

Oracle Press Releases - Wed, 2019-04-17 07:00
Press Release
Leading Pharmacy Extends 100 Year Legacy with Oracle Farmatodo expands operations in new countries with modern retail technology

REDWOOD SHORES, Calif. and CARACAS, Venezuela—Apr 17, 2019

Farmatodo, a leading Venezuelan self-service chain of pharmacies, has specialized in providing medicine, personal care, beauty and baby products to help consumers care for themselves and their families for more than 100 years. Through a seamless shopping experience, the company offers approximately 8000 products in more than 200 stores and online in Venezuela and Colombia. With Oracle Retail, Farmatodo has established a framework to expand into new countries, deploy new stores faster, and gained the agility to serve in-store shoppers better with a modern point of service (POS) system.

In addition, this new technology will support Farmatodo’s aggressive delivery model in Colombia. While the area is known to have challenging traffic congestion, the pharmacy offers home delivery in up to 30 minutes. To help fulfill this promise, having the real-time inventory visibility and store consistency Oracle provides is critical.

“The continuity and expansion of our retail operation depended on reducing technological risks and improving information integrity and business processes. We replaced outdated legacy systems with Oracle to create a foundation for growth in Latin America,” said Angelo Cirillo, chief information officer, Farmatodo. “Usually these projects take three years for only one country. Leveraging Oracle’s best practices and integrated solutions, we fast-tracked the implementation of two countries in two years.”

The company relies on Oracle Retail Merchandising System, Oracle Retail Store Inventory Management and Oracle Retail Warehouse Management System to manage the business at a corporate level and Oracle Retail Xstore Point-of-Service to enhance the consumer experience on the store floors.

Farmatodo selected Oracle PartnerNetwork (OPN) Platinum level member, Retail Consult to implement the latest versions of the solutions. A longtime collaborator, Retail Consult has a deep understanding of Oracle technology, retail process, and customers. The company employed a multifunctional team with a strong customer-centric approach, along with the Oracle Retail Reference Model, to chart a path to success for Farmatodo.

“To support international expansion, we faced a technological and corporate challenge. The previous experience with Oracle Retail system allowed us to fully evaluate and emulate features and functionalities before extending them throughout new and existing operations,” said Francisco Gerardo Díaz Parra, project director, Farmatodo. “The stability and data security provided by Oracle, combined with the highly skilled implementation partner and defined project governance brought us the optimal mix to integrate processes and modernize systems.”

“For thirty-plus years, we have been working hand-in-hand with global retailers to help ensure successful implementations and outcomes. The power of this combined knowledge continues to be central in delivering unmatched industry best practices and guiding innovations that are enabled by our modern platform. Our goal is to help our customers keep pace with the changes in consumer behavior and to enable them with operational agility and a clear view into their operations so they can move at the same speed,” said Mike Webster, senior vice president, and general manager, Oracle Retail. 

Contact Info
Kris Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Retail Consult

Retail Consult is a highly specialized group that has a big focus on technology solutions for retail, offering clients global perspective and experience with operations in Europe, North, South and Central America. The most senior resources average 15 years of retail experience, and the multilingual team integrates retail-specific skills in strategy, technology architecture, business process, change management, support, and management.

About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility and refine the customer experience. For more information, visit our website www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Westchester Community College Uses Oracle Cloud to Modernize Education Experience

Oracle Press Releases - Wed, 2019-04-17 07:00
Press Release
Westchester Community College Uses Oracle Cloud to Modernize Education Experience
Community college deploys Oracle Student Cloud to recruit and engage students across an expanding portfolio of learning programs

Redwood Shores, Calif.—Apr 17, 2019

Westchester Community College is implementing Oracle Student Cloud solutions to support its goal of providing accessible, high-quality and affordable education to its diverse community. The two-year public college is affiliated with the State University of New York, the nation’s largest comprehensive public university system.

To keep pace with fast-changing workforce requirements and student expectations, institutions such as Westchester Community College are evolving to improve student outcomes and operational efficiency. This change demands a new model for teaching, learning, and research, as well as better ways to recruit, engage, and manage students throughout their lifelong learning experience.

“We are committed to student success, academic excellence, and workforce and economic development. To deliver on those promises we needed to leverage the best technology to modernize our operations and how we engage with our students,” said Dr. Belinda Miles, president of Westchester Community College, Valhalla, N.Y. “By expanding our Oracle footprint with Oracle Student Cloud, we will be able to support a diverse array of academic programs and learning opportunities, including continuing education, while delivering better experiences to our students.”

Oracle Student Cloud solutions, including Student Management and Recruiting, will integrate seamlessly with Westchester’s existing Oracle Campus student information system. With Oracle Student Management, the school will be able to better inform existing and prospective students about classes and services, and Oracle Student Recruiting will improve and simplify the student recruitment process. The college will also be using Oracle Student Engagement to better communicate with and engage current and prospective students.

“Oracle Student Cloud enables organizations such as Westchester to promote an increasingly diverse array of academic programs for successful lifelong learning,” said Vivian Wong, group vice president of higher education development, Oracle. “We are delighted to partner with Westchester on their cloud transformation journey.”

Supporting the entire student life cycle, Oracle Student Cloud is a complete suite of higher education cloud solutions, including Student Management, Student Recruiting, Student Engagement, and Student Financial Planning. Because the modules are designed to work as a suite, institutions can choose their own incremental path to the cloud.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Podcast: On the Highway to Helidon

OTN TechBlog - Tue, 2019-04-16 23:00

Are you familiar with Project Helidon? It’s an open source Java microservices framework introduced by Oracle in September 2018. As Helidon project lead Dmitry Kornilov explains in his article Helidon Takes Flight, "It’s possible to build microservices using Java EE, but it’s better to have a framework designed from the ground up for building microservices."

Helidon consists of a lightweight set of libraries that require no application server and can be used in Java SE applications. While these libraries can be used separately, using them in combination provides developers with a solid foundation on which to build microservices.
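
To make the idea concrete, here is a minimal sketch of a Helidon SE service, assuming the Helidon SE 1.x API that was current at the time of this episode; the class name, route, and greeting are illustrative:

import io.helidon.webserver.Routing;
import io.helidon.webserver.WebServer;

public class HelloHelidon {
    public static void main(String[] args) {
        // Define routing: a single GET endpoint that returns a plain-text greeting.
        Routing routing = Routing.builder()
                .get("/hello", (req, res) -> res.send("Hello from Helidon!"))
                .build();

        // Create and start the web server; no application server is required.
        WebServer.create(routing).start();
    }
}

The entire service is an ordinary Java SE main class, which is the point: the libraries compose without a container.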

In this program we’ll dig into Project Helidon with a panel that consists of two people who are actively engaged in the project and two community leaders who have used Helidon in development projects and organized Helidon-focused meetups.

This program was recorded on Friday, March 8, 2019. So let’s journey through time and space and get to the conversation. Just press play in the widget.

The Panelists

Dmitry Kornilov
Senior Software Development Manager, Oracle; Project Lead, Project Helidon
Prague, Czech Republic

Tomas Langer
Consulting Member of Technical Staff, Oracle; Member of the Project Helidon Team
Prague, Czech Republic

José Rodrigues (Oracle ACE Associate)
Principal Consultant and Business Analyst, Link Consulting; Co-Organizer, Oracle Developer Meetup Lisbon
Lisbon, Portugal

Phil Wilkins (Oracle ACE)
Senior Consultant, Capgemini; Co-Organizer, Oracle Developer Meetup London
Reading, UK

Help with v$statname and v$sysstat

Tom Kyte - Tue, 2019-04-16 19:06
Tom, Can you please provide info on how I can find full table scan and index scan activity in the database using v$statname and v$sysstat? Do I need to set TIMED_STATISTICS=TRUE before running queries against v$sysstat?...
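
For readers looking for a starting point, here is a sketch of the kind of query involved. The statistic names below are standard entries in v$statname but should be verified against your release; scan counters in v$sysstat are maintained regardless of TIMED_STATISTICS, which governs time-based statistics:

-- Join v$statname to v$sysstat to read scan-related counters since instance startup.
SELECT sn.name, st.value
  FROM v$statname sn
  JOIN v$sysstat st ON st.statistic# = sn.statistic#
 WHERE sn.name IN ('table scans (short tables)',    -- full scans of small tables
                   'table scans (long tables)',     -- full scans of large tables
                   'table fetch by rowid',          -- rows fetched via index access
                   'index fast full scans (full)')
 ORDER BY sn.name;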
Categories: DBA Blogs

18c Upgrade: Failed gridSetup.sh -executeConfigTools: Cluster upgrade state is [UPGRADE FINAL]

Michael Dinh - Tue, 2019-04-16 16:53

Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL]

18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL]

This is a multi-part series on the 18c upgrade; I suggest reading the two posts above first.

Commands for gridSetup.sh

+ /u01/18.3.0.0/grid/gridSetup.sh -silent -skipPrereqs -applyRU /media/patch/Jan2019/28828717 -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J Doracle.install.crs.enableRemoteGIMR=false
Preparing the home to patch...
Applying the patch /media/patch/Jan2019/28828717...
Successfully applied the patch.
The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2019-04-16_06-19-12AM/installerPatchActions_2019-04-16_06-19-12AM.log
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/18.3.0.0/grid/install/response/grid_2019-04-16_06-19-12AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-04-16_06-19-12AM/gridSetupActions2019-04-16_06-19-12AM.log

As a root user, execute the following script(s):
        1. /u01/18.3.0.0/grid/rootupgrade.sh

Execute /u01/18.3.0.0/grid/rootupgrade.sh on the following nodes:
[racnode-dc1-1, racnode-dc1-2]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes, except a node you designate as the last node. When all the nodes except the last node are done successfully, run the script on the last node.

Successfully Setup Software.
As install user, execute the following command to complete the configuration.
        /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp [-silent]


+ exit
oracle@racnode-dc1-1::/home/oracle
$

Basically, the error provided is utterly useless.

oracle@racnode-dc1-1::/home/oracle
$ /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

[WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
oracle@racnode-dc1-1::/home/oracle

Check logs from directory /u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ cd /u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ ls -alrt
total 1072
-rw-r----- 1 oracle oinstall     130 Apr 16 12:59 installerPatchActions_2019-04-16_12-59-56PM.log
-rw-r----- 1 oracle oinstall       0 Apr 16 12:59 gridSetupActions2019-04-16_12-59-56PM.err
drwxrwx--- 8 oracle oinstall    4096 Apr 16 13:01 ..
-rw-r----- 1 oracle oinstall 1004378 Apr 16 13:01 gridSetupActions2019-04-16_12-59-56PM.out
-rw-r----- 1 oracle oinstall    2172 Apr 16 13:01 time2019-04-16_12-59-56PM.log ***
-rw-r----- 1 oracle oinstall   73047 Apr 16 13:01 gridSetupActions2019-04-16_12-59-56PM.log ***
drwxrwx--- 2 oracle oinstall    4096 Apr 16 13:01 .

Check time2019-04-16_12-59-56PM.log

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ cat time2019-04-16_12-59-56PM.log
 # Message # ElapsedTime # Current Time ( ms )
 # Starting step:INITIALIZE_ACTION of state:init #  0  # 1555412405106
 # Finished step:INITIALIZE_ACTION of state:init # 1 # 1555412405106
 # Starting step:EXECUTE of state:init #  0  # 1555412405108
 # Finished step:EXECUTE of state:init # 3 # 1555412405111
 # Starting step:VALIDATE of state:init #  0  # 1555412405113
 # Finished step:VALIDATE of state:init # 2 # 1555412405115
 # Starting step:TRANSITION of state:init #  0  # 1555412405115
 # Finished step:TRANSITION of state:init # 2 # 1555412405117
 # Starting step:EXECUTE of state:CRSConfigTools #  0  # 1555412405117
 # Finished step:EXECUTE of state:CRSConfigTools # 813 # 1555412405930
 # Starting step:VALIDATE of state:CRSConfigTools #  0  # 1555412405930
 # Finished step:VALIDATE of state:CRSConfigTools # 0 # 1555412405930
 # Starting step:TRANSITION of state:CRSConfigTools #  0  # 1555412405930
 # Finished step:TRANSITION of state:CRSConfigTools # 26591 # 1555412432521
 # Starting step:INITIALIZE_ACTION of state:setup #  0  # 1555412432521
 # Finished step:INITIALIZE_ACTION of state:setup # 0 # 1555412432521
 # Starting step:EXECUTE of state:setup #  0  # 1555412432522
 # Finished step:EXECUTE of state:setup # 6 # 1555412432528
 # Configuration in progress. #  0  # 1555412436788
 # Update Inventory in progress. #  0  # 1555412437768
 # Update Inventory successful. # 52612 # 1555412490380
 # Upgrading RHP Repository in progress. #  0  # 1555412490445

================================================================================
 # Upgrading RHP Repository failed. # 12668 # 1555412503112
================================================================================

 # Starting step:VALIDATE of state:setup #  0  # 1555412503215
 # Finished step:VALIDATE of state:setup # 15 # 1555412503230
 # Starting step:TRANSITION of state:setup #  0  # 1555412503230
 # Finished step:TRANSITION of state:setup # 0 # 1555412503230
 # Starting step:EXECUTE of state:finish #  0  # 1555412503230
 # Finished step:EXECUTE of state:finish # 6 # 1555412503236
 # Starting step:VALIDATE of state:finish #  0  # 1555412503237
 # Finished step:VALIDATE of state:finish # 1 # 1555412503238
 # Starting step:TRANSITION of state:finish #  0  # 1555412503238
 # Finished step:TRANSITION of state:finish # 0 # 1555412503238

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

Check gridSetupActions2019-04-16_12-59-56PM.log

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ grep -B2 -A100 'Executing RHPUPGRADE' gridSetupActions2019-04-16_12-59-56PM.log
INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
INFO:  [Apr 16, 2019 1:01:30 PM] Executing RHPUPGRADE
INFO:  [Apr 16, 2019 1:01:30 PM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn.handleProcess() entered.
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: getting configAssistantParmas.
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: checking secretArguments.
INFO:  [Apr 16, 2019 1:01:30 PM] No arguments to pass to stdin
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: starting read loop.
INFO:  [Apr 16, 2019 1:01:43 PM] Completed Plugin named: rhpupgrade
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.saveSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.executeSelectedToolsInAggregate action performed
INFO:  [Apr 16, 2019 1:01:43 PM] Exiting ConfigClient.executeSelectedToolsInAggregate method
INFO:  [Apr 16, 2019 1:01:43 PM] Adding ExitStatus SUCCESS_MINUS_RECTOOL to the exit status set
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.saveSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Calling event ConfigSessionEnding
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.endSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Completed Configuration
INFO:  [Apr 16, 2019 1:01:43 PM] Adding ExitStatus FAILURE to the exit status set
INFO:  [Apr 16, 2019 1:01:43 PM] All forked task are completed at state setup
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Validating state <setup>

================================================================================
WARNING:  [Apr 16, 2019 1:01:43 PM] [WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
================================================================================

INFO:  [Apr 16, 2019 1:01:43 PM] Advice is CONTINUE
INFO:  [Apr 16, 2019 1:01:43 PM] Completed validating state <setup>
INFO:  [Apr 16, 2019 1:01:43 PM] Verifying route success
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Executing action at state finish
INFO:  [Apr 16, 2019 1:01:43 PM] FinishAction Actions.execute called
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application
INFO:  [Apr 16, 2019 1:01:43 PM] Completed executing action at state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Moved to state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Validating state <finish>
WARNING:  [Apr 16, 2019 1:01:43 PM] Validation disabled for the state finish
INFO:  [Apr 16, 2019 1:01:43 PM] Completed validating state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Terminating all background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Terminated all background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Successfully executed the flow in SILENT mode
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application
INFO:  [Apr 16, 2019 1:01:43 PM] inventory location is/u01/app/oraInventory
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application

================================================================================
INFO:  [Apr 16, 2019 1:01:43 PM] Exit Status is -1
INFO:  [Apr 16, 2019 1:01:43 PM] Shutdown Oracle Grid Infrastructure 18c Installer
INFO:  [Apr 16, 2019 1:01:43 PM] Unloading Setup Driver
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$

The Exit Status of -1 is probably why the cluster upgrade state is [UPGRADE FINAL].
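
For reference, the cluster upgrade state can be checked at any time with crsctl; the -f flag prints the full version string, including the upgrade state and the active patch level (a quick check, shown here without output):

$ /u01/18.3.0.0/grid/bin/crsctl query crs activeversion -f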

Why is the RHP Repository being upgraded when oracle_install_crs_ConfigureRHPS=false?

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ grep -i rhp *
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:04 PM] Setting value for the property:oracle_install_crs_ConfigureRHPS in the bean:CRSInstallSettings
gridSetupActions2019-04-16_12-59-56PM.log: oracle_install_crs_ConfigureRHPS                       false
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:37 PM] Created config job for rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:37 PM] Selecting job named 'Upgrading RHP Repository' for retry
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Started Plugin named: rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Executing RHPUPGRADE
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Completed Plugin named: rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
time2019-04-16_12-59-56PM.log: # Upgrading RHP Repository in progress. #  0  # 1555412490445
time2019-04-16_12-59-56PM.log: # Upgrading RHP Repository failed. # 12668 # 1555412503112
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$

gridsetup_upgrade.rsp is used for the upgrade; the pertinent info is shown below.

## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##

#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE 

#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false

oracle@racnode-dc1-1::/sf_OracleSoftware/18cLinux
$ sdiff -iEZbWBst -w 150 gridsetup.rsp gridsetup_upgrade.rsp
INVENTORY_LOCATION=                                                       |  INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                                    |  oracle.install.option=UPGRADE
ORACLE_BASE=                                                              |  ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=                                       |  oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=                           |  oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=                     |  oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=                              |  oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=                    |  oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.asm.configureGIMRDataDG=                                   |  oracle.install.asm.configureGIMRDataDG=false
oracle.install.asm.configureAFD=                                          |  oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=                                         |  oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=                                |  oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=                                   |  oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript=                          |  oracle.install.crs.rootconfig.executeRootScript=false

ora.cvu does not report any errors.

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ crsctl stat res -w "TYPE = ora.cvu.type" -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
oracle@racnode-dc1-1:+ASM1:/home/oracle
$ crsctl stat res -w "TYPE = ora.cvu.type" -p|grep RESULTS | sed 's/,/\n/g'
CHECK_RESULTS=
oracle@racnode-dc1-1:+ASM1:/home/oracle
$

Run rhprepos upgradeSchema -fromversion 12.1.0.2.0 – FAILED.

oracle@racnode-dc1-1::/home/oracle
$ /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
PRCT-1474 : failed to run 'mgmtca' on node racnode-dc1-2.

oracle@racnode-dc1-1::/home/oracle
$ ps -ef|grep pmon
oracle    9722  4804  0 19:37 pts/0    00:00:00 grep --color=auto pmon
oracle   10380     1  0 13:46 ?        00:00:01 asm_pmon_+ASM1
oracle   10974     1  0 13:46 ?        00:00:01 apx_pmon_+APX1
oracle   11218     1  0 13:47 ?        00:00:02 ora_pmon_hawk1
oracle@racnode-dc1-1::/home/oracle
$ ssh racnode-dc1-2
Last login: Tue Apr 16 18:44:30 2019

----------------------------------------
Welcome to racnode-dc1-2
OracleLinux 7.3 x86_64

FQDN: racnode-dc1-2.internal.lab
IP:   10.0.2.15

Processor: Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
#CPU's:    2
Memory:    5709 MB
Kernel:    4.1.12-61.1.18.el7uek.x86_64

----------------------------------------

oracle@racnode-dc1-2::/home/oracle
$ ps -ef|grep pmon
oracle    9219     1  0 13:44 ?        00:00:01 asm_pmon_+ASM2
oracle   10113     1  0 13:45 ?        00:00:01 apx_pmon_+APX2
oracle   10619     1  0 13:45 ?        00:00:01 ora_pmon_hawk2
oracle   13200 13178  0 19:37 pts/0    00:00:00 grep --color=auto pmon
oracle@racnode-dc1-2::/home/oracle
$

In conclusion, the silent upgrade process is poorly documented at best.

I'm starting to wonder if the following parameters contributed to the issue:

-J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J Doracle.install.crs.enableRemoteGIMR=false

Check your hints carefully

Bobby Durrett's DBA Blog - Tue, 2019-04-16 16:32

Back in 2017 I wrote about how I had to disable the result cache after upgrading a database to 11.2.0.4. This week I found one of our top queries and it looked like removing the result cache hints made it run 10 times faster. But this did not make sense because I had disabled the result cache. Then I examined the hints more closely. They looked like this:

/*+ RESULT CACHE */

There should be an underscore between the two words. I looked up hints in the manuals and found that CACHE is a real hint. So, I tried the query with these three additional combinations:

 
/*+ RESULT */
 
/*+ CACHE */
 
/*+ RESULT_CACHE */

It ran slow with the original hint and with just the CACHE hint, but not with the others. So, the moral of the story is to check your hints carefully because they may not be what you think they are.
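
One way to verify which spelling actually takes effect is to look at the execution plan. A sketch follows; the table name is a placeholder, and this assumes the result cache is enabled on the instance:

-- With the correctly spelled hint, the plan shows a RESULT CACHE operation;
-- with /*+ RESULT CACHE */ (space instead of underscore) it does not.
EXPLAIN PLAN FOR
SELECT /*+ RESULT_CACHE */ COUNT(*) FROM my_table;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);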

Bobby

Categories: DBA Blogs

Spring Cleaning: 5 Ways to Digital Declutter and Stay Proactive

Chris Warticki - Tue, 2019-04-16 15:13
Your Digital Spring Cleaning Tips

It’s that time of year—spring cleaning is here. Time to dust off your IT strategy and make sure your security, software, systems, and support are ready for the season ahead. Follow these easy, actionable tips to ensure your business is prepared, cyber safe, protected, and compliant.


Tip 1: Commit to Continuous Innovation

Now is the time to ensure you are maximizing the value of your Oracle investment and are ready for the future. If you haven’t already done so, explore Oracle Applications Unlimited and Oracle Premier Support offerings for your covered on-premises applications, including Oracle E-Business Suite, JD Edwards EnterpriseOne, PeopleSoft, and Siebel, and take advantage of the Lifetime Support Policy for your Applications Unlimited products and the latest product roadmaps.

Tip 2: Update Your Software

Outdated software is as problematic as having no security defense at all. Ensure your Oracle software is up to date to reduce the risk of cyber threats. Visit My Oracle Support to get the latest updates and details.

Looking for streamlined and more efficient upgrades with shorter cycles? Discover how you can receive continuous innovation releases for your covered Oracle Applications[1] with Oracle Applications Unlimited.

Tip 3: Patch, Patch, Patch

Security patching is essential for securing enterprise software and must be a core part of your security strategy. Failure to patch your software at the source leaves your software open to attack and your business open to risk.

According to the U.S. Department of Homeland Security, “it is necessary for all organizations to establish a strong ongoing patch management process to ensure the proper preventive measures are taken against potential threats.”

Learn more about the importance of cybersecurity (PDF) and how Oracle Support can help protect your business from cyber threats.

Tip 4: Do a Compliance Check

Many new regulations affecting businesses everywhere are being enacted, spanning tax and legal requirements as well as data privacy rules such as the General Data Protection Regulation (GDPR). Learn how Oracle can help you on the road to compliance.

Tip 5: Polish Up Your Skills and Expertise

Take Oracle Support Accreditation learning paths to get support best practices and tips directly from Oracle product experts. By completing the accreditation learning series, you can increase your proficiency with My Oracle Support’s core functions and build skills to help you leverage Oracle solutions, tools, and knowledge that enable productivity. Get more insights into the benefits of getting accredited.


Whether it’s spring or any other season, we encourage customers to learn more about the fundamentals of protecting their businesses with a trusted partner who both understands the importance of security and delivers ongoing and unparalleled product innovation. Visit our Oracle Premier Support website to learn more.

[1] Covered Oracle Applications include PeopleSoft, Oracle E-Business Suite, JD Edwards EnterpriseOne, and Siebel, excluding specified individual products that Oracle will not extend support for beyond the already committed dates.

Latest Blog Posts from Oracle ACEs: April 7-13

OTN TechBlog - Tue, 2019-04-16 14:00

Busy as bees, these ACEs have been, keeping the buzz going with another week's worth of posts offering the kind of technical experience and expertise that can help to keep you from getting stung on your next project.

Oracle ACE Director Franck Pachot
Data Engineer, CERN
Lausanne, Switzerland

Oracle ACE Jhonata Lamim
Senior Oracle Consultant, Exímio Soluções em TI
Santa Catarina, Brazil

Oracle ACE Marco Mischke
Team Lead, Database Projects, Robotron Datenbank-Software GmbH
Dresden, Germany

Oracle ACE Noriyoshi Shinoda
Database Consultant, Hewlett Packard Enterprise
Tokyo, Japan

Oracle ACE Paul Guerin
Database Service Delivery Leader, Hewlett-Packard
Philippines

Oracle ACE Ricardo Giampaoli
EPM Architect Consultant, The Hackett Group
Leinster, Ireland

Oracle ACE Rodrigo de Souza
Solutions Architect, Innive Inc
Rio Grande do Sul, Brazil

Oracle ACE Sean Stuber
Database Analyst, American Electric Power
Columbus, Ohio

Oracle ACE Stefan Koehler
Independent Oracle Performance Consultant and Researcher
Bavaria, Germany

Oracle ACE Yong Jing
System Architect Manager, Changde Municipal Human Resources and Social Security Bureau
Changde City, China

Oracle ACE Associate Emad Al-Mousa
Senior IT Consultant, Saudi Aramco
Saudi Arabia

Oracle ACE Eugene Fedorenko
Senior Architect, Flexagon
De Pere, Wisconsin

Related Resources

Oracle ACE Guide
Oracle ACE Director: Top-tier community members who engage more closely with Oracle.
Oracle ACE: Established Oracle advocates who are well known in the community.
Oracle ACE Associate: Entry point for the Oracle ACE program.

Git Branch Protection in Oracle Developer Cloud

OTN TechBlog - Tue, 2019-04-16 11:26

In the April release of Oracle Developer Cloud, we introduced a feature you can use to protect a specific branch of a Git repository hosted by Oracle Developer Cloud. This blog will help you understand the options.

Who has access to branch protection?

The only one allowed to configure branch protection for a Git repository is the user with the Project Owner role for the project in which the Git repository was created.

Where can we find the branch protection option?

To access this feature, select the Project Administration tab on the left navigation bar and then select the Branches tile in Developer Cloud. This feature is accessible to a Project Owner, not to a Project Member.


Branch Protection Settings – Getting Started

To get started with setting branch protections, select the Git repository and the branch in the Branches tab. The dropdown lists all the repositories in the project and all the branches created for the selected repository.  In the following screenshot, I selected the NodeJSMicroService.git repository and the master branch.


Branch Protection – Options

There are four options for branch protection:

  • Open
  • Requires Review
  • Private
  • Frozen

By default, every branch of every Git repository is Open.


Branch Protection Options – Details

Open

By default, any branch of a given Git repository has a branch protection type of Open. This means there are no restrictions on the branch. You can still impose two rules, without imposing code merge rules, by selecting one or both of the following checkboxes:

Do not allow forced pushes: Select this option to ensure that, even when there are merge conflicts, no code can be pushed to the branch using the force-push provision in Git (see the sketch below).

Do not allow renaming or deleting the branch: Select this option to ensure that nobody can rename or delete the branch, whether manually or as part of a merge request.

You can save the configuration by clicking the Save button or discard it by clicking the Discard button.
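
As an illustration of the force-push rule, here is a hypothetical sketch; the remote, branch, and commands are placeholders, and the exact server-side rejection message will vary:

# Rewrite local history, then try to force-push to the protected branch.
git commit --amend -m "reworded commit"
git push --force origin master
# With "Do not allow forced pushes" enabled, the server rejects this push;
# an ordinary fast-forward push is still allowed.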


Requires Review

If the Project Owner opts for this branch protection option, code must be reviewed and approved by the configured reviewers before any push or code merge can take place. This is very useful for the master branch, to avoid any direct push or code merge without prior review. You can configure reviewers who are part of the project and set the criteria for approval. The Criteria for approval dropdown lets you require approval from all the configured reviewers, just one reviewer, or any two of them.

In addition to the review criteria, there are a few other checkboxes that can provide more comprehensive coverage as part of this protection option.

Requires Successful Build: Select this checkbox to ensure that the review branch to be merged into the selected branch has a successful last build.

Reapproval needed when the branch is updated: Select this checkbox to ensure that, if a change is pushed to a branch after some reviewers have approved the merge request, the merge will only happen after the reviewers reapprove the merge request.

Changes pushed to the target branch must match review content: Select this checkbox to ensure that the reviewed code and the merged code are one and the same.

You can save the configuration by clicking the Save button or discard it by clicking the Discard button.

Private

This branch protection option ensures that only users who have been designated as branch owners can push code to the branch directly. All other users must create a merge request to get their code into the branch. This option makes sense when users have branched the code to work on a fix or enhancement and you want to restrict direct pushes to a defined set of people.

Note: A Project Owner may not be a branch owner.

You can also impose two additional rules by selecting one or both of the following checkboxes:

Do not allow forced pushes: Select this checkbox to ensure that, if there are any merge conflicts, no code can be pushed to the branch using the force push provision in Git.

Do not allow renaming or deleting the branch: Select this checkbox to ensure that nobody can rename the branch or delete it manually or as part of the merge request.

You can save the configuration by clicking the Save button or discard it by clicking the Discard button.


Frozen

This is probably the simplest but most crucial branch protection option. As the name suggests, it freezes the branch and prevents any further changes. This option comes in handy during a code freeze for the release or master branch. Once a branch has been marked as Frozen, only the Project Owner can undo it.

You can save the configuration by clicking the Save button or discard it by clicking the Discard button.

Branch protection can help streamline release management for the project and enforce best practices in your development process.

To learn more about this and other new features in Oracle Developer Cloud, take a look at the What's New in Oracle Developer Cloud Service document and the links it provides to our product documentation. If you have any questions, you can reach us on the Developer Cloud slack channel or in the online forum.

Happy Coding!
