Feed aggregator

Oracle Forms V Oracle APEX Check List

Tom Kyte - 9 hours 32 min ago
Oracle Forms has some strengths and is still the best back-office tool from Oracle from my point of view. Here are some issues that I miss with APEX. Maybe you already have these options in 18.x? Can you check this list: 1. 100% accessibility for...
Categories: DBA Blogs

Display blob pdf

Tom Kyte - 9 hours 32 min ago
Dear, I have a table lob_table (id number, doc blob, namefile varchar(200)). I would like to display the blob doc, which is a PDF file, and print it in SQL Developer. I have created this procedure; is it correct? CREATE OR REPLACE PROCEDURE PROC2 AS...
Categories: DBA Blogs

Java9 new features

Yann Neuhaus - 10 hours 45 min ago


Java9 is on its way now. In this blog I’ll talk about the new features I found interesting, performance, and so on.

Configure Eclipse for Java9

Prior to Eclipse Oxygen 4.7.1a, you’ll have to configure Eclipse a little bit to make it run your Java9 projects.

In eclipse.ini, add the following after --launcher.appendVmargs:

-vm
C:\Program Files\Java\jdk-9.0.4\bin\javaw.exe

 

Still in eclipse.ini add:

--add-modules=ALL-SYSTEM

 

You should have something like this:

--launcher.appendVmargs
-vm
C:\Program Files\Java\jdk-9.0.4\bin\javaw.exe
-vmargs
-Dosgi.requiredJavaVersion=1.6
-Xms40m
-Xmx512m
--add-modules=ALL-SYSTEM
New Features

Modules

Like a lot of other languages (and in order to obfuscate the code a little more), Java is going to use modules. It simply means that you’ll be able to declare that your code requires a specific library. This is quite helpful for small-memory devices that do not need the whole JVM to be loaded. You can find the list of available modules here.

When creating a module, you’ll generate a file called module-info.java which will be like:

module test.java9 {
	requires com.dbiservices.example.engines;
	exports com.dbiservices.example.car;
}

Here my module requires the “engines” module and exports the “car” package. This allows loading only the classes related to our business and not some side libraries; it helps manage memory more efficiently, but it also requires some understanding of the module system. In addition, it creates a real dependency system between jars and prevents using public classes that were not supposed to be exposed through the API. It also prevents some strange behavior when you have duplicate entries, like several jar versions in the classpath. All non-exported packages are encapsulated by default.
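For reference, the provider side of such a dependency would look something like this — a sketch, as the post does not show the “engines” module, so the exported package name is assumed to match the module name:

module com.dbiservices.example.engines {
	exports com.dbiservices.example.engines;
}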

JShell

Java9 now provides JShell: like in other languages, you can now execute Java code through an interactive shell prompt. Simply start jshell from the bin folder of the JDK:

jshell

This kind of tool can greatly improve productivity for small tests; you don’t have to create small testing classes anymore. It is very useful for testing regular expressions, for example.
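For example, a regular-expression check takes a single line — a sketch of a session (java.util.regex.* is among jshell’s default imports):

jshell> Pattern.compile("[0-9]+").matcher("jdk-9.0.4").find()
$1 ==> true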

New HTTP API

The old HTTP API is finally being upgraded. It now supports WebSockets and the HTTP/2 protocol out of the box. For the moment the API is placed in an incubator module, which means it can still change a little, but you can start playing with it as follows:

import jdk.incubator.http.*;

import java.io.IOException;
import java.net.URI;

public class Run {

  public static void main(String[] args) throws IOException, InterruptedException {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest req = HttpRequest.newBuilder(URI.create("http://www.google.com"))
		              .header("User-Agent","Java")
		              .GET()
		              .build();
    HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandler.asString());
    System.out.println(resp.body()); // print the response body
  }
}

You’ll have to setup module-info.java accordingly:

module test.java9 {
	requires jdk.incubator.httpclient;
}
Private interface methods

Since Java 8, an interface can contain behavior instead of only method signatures. If you have several methods doing almost the same thing, you would usually refactor them into a private method, but default methods in Java 8 can’t be private. In Java 9 you can add private helper methods to interfaces, which solves this issue:

public interface CarContract {

	void normalMethod();
	default void defaultMethod() {doSomething();}
	default void secondDefaultMethod() {doSomething();}
	
	private void doSomething(){System.out.println("Something");}
}

The private method “doSomething()” is not exposed as part of the interface.
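To illustrate, a hypothetical implementing class (not shown in the original post) can call the default methods but not the private helper:

public class Car implements CarContract {

	public void normalMethod() { }

	public static void main(String[] args) {
		CarContract car = new Car();
		car.defaultMethod();        // prints "Something"
		car.secondDefaultMethod();  // prints "Something"
		// car.doSomething();       // would not compile: the helper is private
	}
}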

Unified JVM Logging

Java 9 adds a handy unified logging feature to help debug the JVM. You can now enable logging for different tags like gc, compiler, threads and so on, using the -Xlog command line parameter. Here’s an example of the configuration for the gc tag, using debug level without decoration:

-Xlog:gc=debug:file=log/gc.log:none
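On a full command line this looks as follows — Main being a hypothetical main class, and the log directory having to exist beforehand:

java -Xlog:gc=debug:file=log/gc.log:none Main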

And the result:

ConcGCThreads: 2
ParallelGCThreads: 8
Initialize mark stack with 4096 chunks, maximum 16384
Using G1
GC(0) Pause Young (G1 Evacuation Pause) 24M->4M(254M) 5.969ms
GC(1) Pause Young (G1 Evacuation Pause) 59M->20M(254M) 21.708ms
GC(2) Pause Young (G1 Evacuation Pause) 50M->31M(254M) 20.461ms
GC(3) Pause Young (G1 Evacuation Pause) 84M->48M(254M) 30.398ms
GC(4) Pause Young (G1 Evacuation Pause) 111M->70M(321M) 31.902ms

We can even combine tags:

-Xlog:gc+heap=debug:file=log/heap.log:none

Which results in this:

Heap region size: 1M
Minimum heap 8388608  Initial heap 266338304  Maximum heap 4248829952
GC(0) Heap before GC invocations=0 (full 0):
GC(0)  garbage-first heap   total 260096K, used 24576K [0x00000006c2c00000, 0x00000006c2d007f0, 0x00000007c0000000)
GC(0)   region size 1024K, 24 young (24576K), 0 survivors (0K)
GC(0)  Metaspace       used 6007K, capacity 6128K, committed 6272K, reserved 1056768K
GC(0)   class space    used 547K, capacity 589K, committed 640K, reserved 1048576K
GC(0) Eden regions: 24->0(151)
GC(0) Survivor regions: 0->1(3)
GC(0) Old regions: 0->0
GC(0) Humongous regions: 0->0
GC(0) Heap after GC invocations=1 (full 0):
GC(0)  garbage-first heap   total 260096K, used 985K [0x00000006c2c00000, 0x00000006c2d007f0, 0x00000007c0000000)
GC(0)   region size 1024K, 1 young (1024K), 1 survivors (1024K)
GC(0)  Metaspace       used 6007K, capacity 6128K, committed 6272K, reserved 1056768K
GC(0)   class space    used 547K, capacity 589K, committed 640K, reserved 1048576K
GC(1) Heap before GC invocations=1 (full 0):
GC(1)  garbage-first heap   total 260096K, used 155609K [0x00000006c2c00000, 0x00000006c2d007f0, 0x00000007c0000000)
GC(1)   region size 1024K, 152 young (155648K), 1 survivors (1024K)
GC(1)  Metaspace       used 6066K, capacity 6196K, committed 6272K, reserved 1056768K
GC(1)   class space    used 548K, capacity 589K, committed 640K, reserved 1048576K
GC(1) Eden regions: 151->0(149)
GC(1) Survivor regions: 1->3(19)
...
...

There are other new features not detailed here, but you can find a list here.

 

The post Java9 new features appeared first on Blog dbi services.

Configure AFD with Grid Infrastructure software (SIHA & CRS) from the very beginning.

Yann Neuhaus - 11 hours 33 min ago

Introduction:

Oracle ASM Filter Driver (Oracle ASMFD) simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.
Oracle ASM Filter Driver (Oracle ASMFD) is a kernel module that resides in the I/O path of the Oracle ASM disks. Oracle ASM uses the filter driver to validate write I/O requests to Oracle ASM disks.

In this blog I will explain how to set up the Grid Infrastructure software with AFD on a SIHA or CRS architecture.

Case1. You want to configure AFD from the very beginning (no UDEV, no ASMLib) with SIHA, a Single Instance High Availability installation (formerly Oracle Restart)

Issue:

If we want to use the AFD driver from the very beginning, we have to use Oracle AFD to prepare the disks for the ASM instance.
The issue comes from the fact that AFD only becomes available after the installation (it cannot be configured before the installation)!

Solution:

Step1. Install the GI stack in software-only mode

[Screenshot: setup_soft_only]

Step2. Run root.sh when prompted, without any other action (do not execute the generated script rootUpgrade.sh)

Step3. Run roothas.pl to set up your HAS stack

[root] /u01/app/grid/product/12.2.0/grid/perl/bin/perl -I /u01/app/grid/product/12.2.0/grid/perl/lib -I /u01/app/grid/product/12.2.0/grid/crs/install /u01/app/grid/product/12.2.0/grid/crs/install/roothas.pl

Step4. As root user proceed to configure AFD

 /u01/app/grid/product/12.2.0/grid/bin/crsctl stop has -f
/u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_configure
/u01/app/grid/product/12.2.0/grid/bin/crsctl start has

Step5. As grid user, set the AFD discovery string for the new devices

 /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_dsset '/dev/sd*'
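You can verify the discovery string afterwards with afd_dsget (output shown for illustration):

 /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_dsget
AFD discovery string: /dev/sd*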

Step6. As root user, label the new disk

 /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_label DISK1 /dev/sdb1

Step7. As grid user, launch ASMCA to create your ASM instance, based on a disk group created on the newly labeled disk DISK1

[Screenshot: disk_AFD]

Step8. Display the AFD driver within the HAS stack.

[Screenshot: check_res]
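You can also check the driver state from the command line; the output should look roughly like this (the message text varies slightly between versions, and the host name will be your own):

/u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'dbi1'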

 

Case2. You want to configure AFD from the very beginning (no UDEV, no ASMLib) with CRS: Cluster Ready Services

Issue:

By installing in software-only mode, you just copy and relink the binaries.
No wrapper scripts (such as crsctl or clsecho) are created.
The issue is that AFD needs the wrapper scripts, not the binaries (crsctl.bin).

Solution:

Step1. Do it on all nodes.

Install Grid Infrastructure on all nodes of the future cluster in “Software-only Installation” mode.

[Screenshot: setup_soft_only]

Step2. Do it on all nodes.

After the installation the wrapper scripts are not present. You can copy them from any other installation (a SIHA one works too) or use a cloned home.

After getting the two scripts, modify the variables inside them to match the system used for the installation:

ORA_CRS_HOME=/u01/app/grid/product/12.2.0/grid   # should be changed
MY_HOST=dbi1                                     # should be changed
ORACLE_USER=grid
ORACLE_HOME=$ORA_CRS_HOME
ORACLE_BASE=/u01/app/oracle
CRF_HOME=/u01/app/grid/product/12.2.0/grid       # should be changed

Step3. Do it on all nodes

Configure AFD:

[root@dbi1 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_configure
AFD-627: AFD distribution files found.
AFD-634: Removing previous AFD installation.
AFD-635: Previous AFD components successfully removed.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.

Step4. Do it only on the first node.

Scan & label the new disks using AFD.

/u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_label DISK1 /dev/sdb1
/u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_label DISK2 /dev/sdc1
/u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_label DISK3 /dev/sdd1
[root@dbi1 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_scan
[root@dbi1 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DISK1                       ENABLED   /dev/sdb1
DISK2                       ENABLED   /dev/sdc1
DISK3                       ENABLED   /dev/sdd1

Step5. Do it on the other nodes.

Scan and display the disks on the other nodes of the future cluster. No need to label them again.

[root@dbi2 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_scan
[root@dbi2 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DISK1                       ENABLED   /dev/sdb1
DISK2                       ENABLED   /dev/sdc1
DISK3                       ENABLED   /dev/sdd1

Step6. Do it on 1st node

Run the script config.sh as oracle/grid user

/u01/app/grid/product/12.2.0/grid/crs/config/config.sh

[Screenshot: config_luster]

Step7. Do it on 1st node

Set up the connectivity between all the future nodes of the cluster and follow the wizard.

[Screenshot: conn_all_nodes]

Step8. Do it on 1st node

You will be asked to create an ASM disk group.

Normally, without the previous steps, this would not be possible: with no UDEV, no ASMLib and no AFD configured, there would be no labeled disks for this step.

[Screenshot: create_asm_DG]

But…….

Step9. Do it on 1st node

Change the discovery path to ‘AFD:*’ and you should retrieve the disks labeled in the previous step.

[Screenshot: afd_path]

Step10. Do it on 1st node

Provide the AFD-labeled disks to create the ASM disk group for the OCR files. Uncheck “Configure Oracle ASM Filter Driver”.

[Screenshot: CREATE_ASM_DG_2]

Step11. Do it on 1st node

Finalize the configuration as per documentation.

 

Additionally, you can find another (easier) way to install and configure the ASM Filter Driver here:
https://blog.dbi-services.com/oracle-18c-cluster-with-oracle-asm-filter-driver/

Summary: Using the scenarios described above, we can configure the Grid Infrastructure stack with AFD on a SIHA or CRS architecture.

 

The post Configure AFD with Grid Infrastructure software (SIHA & CRS) from the very beginning appeared first on Blog dbi services.

Oracle Utilities Application Framework V4.3.0.6.0 Release

Anthony Shorten - Mon, 2018-09-17 18:05

Oracle Utilities Application Framework V4.3.0.6.0 based products will be released over the coming months. As with past releases, the Oracle Utilities Application Framework has been enhanced with new and updated features for on-premise, hybrid and cloud implementations of Oracle Utilities products.

The Oracle Utilities Application Framework continues to provide a flexible and wide-ranging set of common services and technology to allow implementations to meet the needs of their customers. The latest release provides a wide range of new and updated capabilities to reduce costs and introduce exciting new functionality. The products ship with a complete listing of the changes and new functionality, but here are some highlights:

  • Improved REST Support - The REST support for the product has been enhanced in this release. It is now possible to register REST Services in Inbound Web Services as REST. Inbound Web Services definitions have been enhanced to support both SOAP and REST Services. This has the advantage that the registration of integration is now centralized and the server URL for the services can be customized to suit individual requirements. It is now possible to register multiple REST Services within a single Inbound Web Services to reduce costs in management and operations. Execution of the REST Services has been enhanced to use the Registry as the first reference for a service. No additional deployment effort is necessary for this capability. A separate article on this topic will provide additional information.
  • Improved Web Registry Support for Integration Cloud Service - With the changes in REST and other integration changes, such as Categories and support for other adapters, the Web Service Catalog has been expanded to add support for REST and other services directly, for integration registration for use in the Oracle Integration Cloud.
  • File Access Adapter - In this release a File Adapter has been introduced to allow implementations to parameterize all file integration, to reduce the cost of managing file paths and ease the path to the Oracle Cloud. In Cloud implementations, an additional adapter is available to allow additional storage on the Oracle Object Storage Cloud to supplement cloud storage for Oracle Utilities SaaS solutions. The File Access Adapter includes an Extendable Lookup to define alias and physical location attributes. That lookup can then be used as an alias for file paths in Batch Controls, etc. A separate article on this topic will provide additional information.
  • Batch Start/End Date Time now part of Batch Instance Object - In past releases the Batch Start and End Dates and times were located as data elements within the thread attributes. This made analysis harder to perform. In this release these fields have been promoted as reportable fields directly on the Batch Instance Object for each thread. This will improve capabilities for reporting performance of batch jobs. For backward compatibility, these fields are only populated for new executions. The internal Business Service F1-GetBatchRunStartEnd has been extended to support the new columns and also to detect old executions so that the correct values are returned regardless.
  • New Level of Service Algorithms - In past releases, Batch Level Of Service required the building of custom algorithms for checking batch levels. In this release additional base algorithms for common scenarios like Total Run Time, Throughput and Error Rate are now provided for use. Additionally, it is now possible to define multiple Batch Level Of Service algorithms to model complex requirements. The Health Check API has been enhanced to return the Batch Level Of Service as well as other health parameters. A separate article on this topic will provide additional information.
  • Job Scope in DBMS_SCHEDULER interface - The DBMS_SCHEDULER interface allowed for the specification of parameters at the Batch Control and Global levels as well as at runtime. In this release, it is possible to pre-define parameters within the interface at the Job level, allowing control of individual instances of Batch Controls that are used more than once across chains.
  • Ad-hoc Recalculation of To Do Priority - In a past release of the Oracle Utilities Application Framework, an algorithm to dynamically reassess and recalculate a To Do Priority was introduced. In this release, it is possible to invoke this algorithm in bulk using the newly provided F1-TDCLP Batch Control. This can be used with the algorithm to reassess To Do's to improve manual processing.
  • Introduction of a To Do Monitor Process and Algorithm - One of the issues with To Do's in the field has been that users can forget to manually close the To Do when the issue that caused the condition has been resolved. In this release a new batch control F1-TDMON and a new Monitor algorithm on the To Do Type have been added, so that logic can be introduced to detect the resolution of the issue and have the product automatically close the To Do.
  • New Schema Editor - Based upon feedback from partners and customers, the usability and capabilities of the Schema Editor have been improved to provide more information as part of the basic views to reduce rework and support cross browser development.
  • Process Flow Editor - A new capability has been added to the Oracle Utilities Application Framework to allow complex workflows to be modeled and fully capable workflows introduced. This includes train support (including advanced navigation), support for saving incomplete work, branching and object integration. This process flow editor was introduced internally and used successfully for our cloud automation in the Oracle Utilities Cloud Services Foundation, and has now been introduced, in a new format, for use across the Oracle Utilities Application Framework based products. A separate article on this topic will provide additional information.
  • Improved Google Chrome Support - This release introduces extensive Google Chrome for Business support. Check the availability with each of the individual Oracle Utilities Application Framework based products.
  • New Cube Viewer - In the Oracle Utilities Market Settlements product we introduced a new Cube Viewer to embed advanced analytics into our products. That capability has been made generic and is now included in the Oracle Utilities Application Framework so that products and implementations can build their own cube analytical capabilities. In this release a series of new objects and ConfigTools objects have been introduced to build Cube Viewer based solutions. Note: The Cube Viewer has been built to operate independently of Oracle In-Memory Database support but would greatly benefit from use with Oracle In-Memory Database. A separate article on this topic will provide additional information.
  • Object Erasure Support - To support various data privacy regulations introduced across the world, a new Object Erasure capability has been introduced to manage the erasure or obfuscation of master objects within the Oracle Utilities Application Framework based products. This capability is complementary to the Information Lifecycle Management (ILM) capability introduced to manage transaction objects within the product. A number of objects and ConfigTools objects have been introduced to allow implementations to add Object Erasure to their implementations. A separate article on this topic will provide additional information.
  • Proactive Update ILM Switch Support - In past releases, ILM eligibility and the ILM switch were performed in bulk exclusively by the ILM batch processes or using the Automatic Data Optimization (ADO) feature of the Oracle Database. To work more efficiently, it is now possible to use the new BO Enter Status plug-in and BO Exit Status plug-in to proactively assess the eligibility and set the ILM switch as part of processing, thus reducing ILM workloads.
  • Mobile Framework Auto Deploy Support - This release includes a new optional parameter to automatically deploy mobile content when a deployment is saved. This can avoid the extra manual deployment step, if desired.
  • Required Indicator on Legacy Screens - In past releases, the required indicator, based upon metadata, was introduced for ConfigTools based objects; in this release it has been extended to Oracle Utilities Application Framework legacy screens built using the Oracle Utilities SDK or custom JSP (that conform to the standards required by the Oracle Utilities Application Framework). Note: Some custom JSP's may contain logic that prevents the correct display of the required indicator.
  • Oracle Identity Manager integration improved - In this release the integration with Oracle Identity Manager has been improved with multiple adapters supported and the parameters are now located as a Feature Configuration rather than properties settings. This allows the integration setup to be migrated using Configuration Migration Assistant.
  • Outbound Message Mediator Improvements - In previous releases, implementations were required to use the Outbound Message Dispatcher (F1-OutmsgDispatcher) business service to send an outbound message without instantiating it but where the outbound message Business Object pre-processing algorithms need to be executed.  This business service orchestrated a creation and deletion of the outbound message, which is not desirable for performance reasons. The alternate business service Outbound Message Mediator (F1-OutmsgMediator) routes a message without instantiating anything, so it is preferred when the outbound message should not be instantiated.  However, the Mediator did not execute the Business Object pre-processing algorithms.  In this release the Mediator business service has been enhanced to also execute the Business Object pre-processing algorithms.
  • Deprecations - In this release a few technologies and capabilities will be removed as they were announced in previous releases. These include:
    • XAI Servlet/MPL - After announcing the deprecation of XAI and MPL in 2012, the servlet and MPL software are no longer available in this release. XAI Objects are retained for backward compatibility and last minute migrations to IWS and OSB respectively.
    • Batch On WebLogic - In the Oracle Cloud, batch threadpools were managed under Oracle WebLogic. Given changes to the architecture over the last few releases, running threadpools under WebLogic is no longer supported. As this functionality was never released for use by on-premise customers, this change does not have any impact on on-premise customers.
    • WebLogic Templates - With the adoption of Oracle WebLogic 12.2+, the necessity of custom WebLogic templates was no longer necessary. It is now possible to use the standard Fusion Middleware templates supplied with Oracle WebLogic with a few manual steps. These additional manual steps are documented in the new version of the Installation Guide supplied with the product. Customers may continue to use the Domain Builder supplied with Oracle WebLogic to build custom templates post Oracle Utilities Application Framework product installation. Customers should stop using the Native Installation or Clustering whitepaper documentation for Oracle Utilities Application Framework V4.3.0.5.0 and above as this information is now inside the Installation Guide directly or Oracle WebLogic 12.2.1.x Configuration Guide (Doc Id: 2413918.1) available from My Oracle Support.

A number of additional articles will be published over the next few weeks going over some of these topics as well as updates to key whitepapers will be published.

Q1 FY19 GAAP EPS UP 13% TO $0.57 and NON-GAAP EPS UP 18% TO $0.71

Oracle Press Releases - Mon, 2018-09-17 14:21
Press Release
Q1 FY19 GAAP EPS UP 13% TO $0.57 and NON-GAAP EPS UP 18% TO $0.71
Fusion Cloud ERP Customer Count Nearly 5,500, NetSuite Cloud ERP Customer Count Over 15,000

Redwood Shores, Calif.—Sep 17, 2018

Oracle Corporation (NYSE: ORCL) today announced fiscal 2019 Q1 results. Total Revenues were $9.2 billion, up 1% in U.S. dollars and up 2% in constant currency, compared to Q1 last year. Total Cloud Services and License Support plus Cloud License and On-Premise License revenues were up 2% to $7.5 billion. Cloud Services and License Support revenues were $6.6 billion, while Cloud License and On-Premise License revenues were $867 million. Without the strengthening of the U.S. dollar compared to foreign currencies, Oracle’s reported GAAP and non-GAAP Total Revenues would have been $66 million higher, and Earnings Per Share would have been 1 cent higher.

GAAP Operating Income was up 1% to $2.8 billion and GAAP Operating Margin was 30%. Non-GAAP Operating Income was up 1% to $3.8 billion and non-GAAP Operating Margin was 41%. GAAP Net Income was up 6% to $2.3 billion and non-GAAP Net Income was up 10% to $2.8 billion. GAAP Earnings Per Share was up 13% to $0.57 while non-GAAP Earnings Per Share was up 18% to $0.71.

Short-term deferred revenues were up 2% to $10.3 billion compared to a year ago. Operating Cash Flow was up 5% to $15.5 billion during the trailing twelve months.

“We are off to an excellent start with Q1 non-GAAP earnings per share growing 19% in constant currency,” said Oracle CEO, Safra Catz. “That strong earnings per share growth rate increases my confidence that we will deliver on another fiscal year of double-digit non-GAAP earnings per share growth.”

“The vast majority of ERP applications running in the cloud are either Oracle Fusion or Oracle NetSuite systems,” said Oracle CEO, Mark Hurd. “In the first quarter, we increased our market share as customers continued to buy Oracle Fusion ERP to replace their existing SAP and Workday ERP systems. The Oracle Fusion ERP customer count is now nearly 5,500, while the NetSuite ERP customer count is over 15,000. Virtually every analyst ranks Oracle as the market leader in cloud ERP.”

“The Oracle Autonomous Database is now available on our second generation, highly-secure “Bare-Metal” cloud infrastructure,” said Oracle CTO, Larry Ellison. “Oracle’s Autonomous Database is faster, easier-to-use, more reliable, more secure and much lower cost than Amazon’s databases. And Oracle is the only database that can automatically patch itself while running to protect your data from data theft. These are just some of the reasons why Amazon uses the Oracle database to run its business.”

The Board of Directors increased the authorization for share repurchases by $12.0 billion. The Board of Directors also declared a quarterly cash dividend of $0.19 per share of outstanding common stock. This dividend will be paid to stockholders of record as of the close of business on October 16, 2018, with a payment date of October 30, 2018.

Q1 Fiscal 2019 Earnings Conference Call and Webcast

Oracle will hold a conference call and webcast today to discuss these results at 2:00 p.m. Pacific. You may listen to the call by dialing (816) 287-5563, Passcode: 425392. To access the live webcast, please visit the Oracle Investor Relations website at http://www.oracle.com/investor. In addition, Oracle’s Q1 results and fiscal 2019 financial tables are available on the Oracle Investor Relations website.

A replay of the conference call will also be available by dialing (855) 859-2056 or (404) 537-3406, Passcode: 6387377.

Contact Info
Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE: ORCL), visit www.oracle.com/ or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

“Safe Harbor” Statement

Statements in this press release relating to Oracle's future plans, expectations, beliefs, intentions and prospects, including statements regarding the growth of our non-GAAP EPS are all “forward-looking statements” and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. We presently consider the following to be among the important factors that could cause actual results to differ materially from expectations: (1) Our cloud strategy, including our Oracle Software as a Service, Platform as a Service, Infrastructure as a Service and Data as a Service offerings, may not be successful. (2) If we are unable to develop new or sufficiently differentiated products and services, integrate acquired products and services, or enhance and improve our existing products and support services in a timely manner, or price our products and services to meet market demand, customers may not purchase or subscribe to our software, hardware or cloud offerings or renew software support, hardware support or cloud subscriptions contracts. (3) Enterprise customers rely on our cloud, license and hardware offerings and related services to run their businesses and significant coding, manufacturing or configuration errors in our cloud, license and hardware offerings and related services could expose us to product liability, performance and warranty claims, as well as cause significant harm to our brand and reputation, which could impact our future sales. (4) If the security measures for our products and services are compromised and as a result, our customers’ data or our IT systems are accessed improperly, made unavailable, or improperly modified, our products and services may be perceived as vulnerable, our brand and reputation could be damaged and we may experience legal claims and reduced sales. (5) Our business practices with respect to data could give rise to operational interruption, liabilities or reputational harm as a result of governmental regulation, legal requirements or industry standards relating to consumer privacy and data protection. (6) Economic, political and market conditions can adversely affect our business, results of operations and financial condition, including our revenue growth and profitability, which in turn could adversely affect our stock price. (7) Our international sales and operations subject us to additional risks that can adversely affect our operating results. (8) We have a selective and active acquisition program and our acquisitions may not be successful, may involve unanticipated costs or other integration issues or may disrupt our existing operations. A detailed discussion of these factors and other risks that affect our business is contained in our U.S. Securities and Exchange Commission (SEC) filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading “Risk Factors.” Copies of these filings are available online from the SEC or by contacting Oracle Corporation’s Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of September 17, 2018. Oracle undertakes no duty to update any statement in light of new information or future events. 

Talk to a Press Contact

Ken Bond

  • +1.650.607.0349

Deborah Hellinger

  • +1.212.508.7935

User Session lost using ADF Application

Yann Neuhaus - Mon, 2018-09-17 11:45

In one of my missions, I was involved in a new Fusion Middleware 12c (12.2.1.2) installation with deployments of an ADF application and an Oracle Reports server instance.
This infrastructure is protected using an Oracle Access Manager Single Sign-On server.
In production, the complete environment is fronted by a WAF server terminating the HTTPS connections.
In TEST, the complete environment is fronted by an SSL reverse proxy terminating the HTTPS connections.

In the chosen architecture, all Single Sign-On requests go directly through the reverse proxy to the OAM servers.
The application requests and the reports requests are routed through an HTTP server with the WebGate installed.

Below is an extract of the SSL part of the reverse proxy configuration:
# SSL Virtual Host
<VirtualHost 10.0.1.51:443>
ServerName https://mySite.com
ErrorLog logs/ssl_errors.log
TransferLog logs/ssl_access.log
HostNameLookups off
ProxyPreserveHost On
ProxyPassReverse /oam http://appserver.example.com:14100/oam
ProxyPass /oam http://appserver.example.com:14100/oam
ProxyPassReverse /myCustom-sso-web http://appserver.example.com:14100/myCustom-sso-web
ProxyPass /myCustom-sso-web http://appserver.example.com:14100/myCustom-sso-web
ProxyPass /reports http://appserver.example.com:7778/reports
ProxyPassReverse /reports http://appserver.example.com:7778/reports
ProxyPass /myApplication http://appserver.example.com:7778/myApplication
ProxyPassReverse /myApplication http://appserver.example.com:7778/myApplication
# SSL configuration
SSLEngine on
SSLCertificateFile /etc/httpd/conf/ssl/mySite_com.crt
SSLCertificateKeyFile /etc/httpd/conf/ssl/mySite_com.key
</VirtualHost>

HTTP Server Virtual hosts:
# Local requests
Listen 7778
<VirtualHost *:7778>
ServerName http://appserver.example.com:7778
# Rewrite included for OAM logout redirection
RewriteRule ^/oam/(.*)$ http://appserver.example.com:14100/oam/$1
RewriteRule ^/myCustom-sso-web/(.*)$ http://appserver.example.com:14100/myCustom-sso-web/$1
</VirtualHost>

<VirtualHost *:7778>
ServerName https://mySite.com:443
</VirtualHost>

The ADF application and Reports server mappings are done using custom configuration files included in the httpd.conf file:
#adf.conf
#----------
<Location /myApplication>
SetHandler weblogic-handler
WebLogicCluster appserver.example.com:9001,appserver1.example.com:9003
WLProxySSLPassThrough ON
</Location>

# Force caching for image files
<FilesMatch "\.(jpg|jpeg|png|gif|swf)$">
Header unset Surrogate-Control
Header unset Pragma
Header unset Cache-Control
Header unset Last-Modified
Header unset Expires
Header set Cache-Control "max-age=86400, public"
Header set Surrogate-Control "max-age=86400"
</FilesMatch>

#reports.conf
#-------------
<Location /reports>
SetHandler weblogic-handler
WebLogicCluster appserver.example.com:9004,appserver1.example.com:9004
DynamicServerList OFF
WLProxySSLPassThrough ON
</Location>

After configuring the ADF application and the Reports server to be protected through the WebGate, the users could connect and work without any issue during the first 30 minutes.
Then they lost their sessions. We first thought this was related to the session or inactivity timeouts.
We increased the values of those timeouts without success.
We checked the logs and found out that the issue was related to the OAM and WebGate cookies.

The OAM server gets and sets a cookie named OAM_ID.
Each WebGate gets and sets a cookie named OAMAuthnCookie_ followed by the host name and port.

The contents of the cookies are:

Authenticated User Identity (User DN)
Authentication Level
IP Address
SessionID (Reference to Server side session – OAM11g Only)
Session Validity (Start Time, Refresh Time)
Session InActivity Timeouts (Global Inactivity, Max Inactivity)
Validation Hash

The validity of a WebGate-handled user session is 30 minutes by default; after that, the WebGate checks the OAM cookies.
Those cookies are flagged as secure and were lost because they were not forwarded by the WAF or the reverse proxy due to the HTTPS termination.

We needed to change the SSL reverse proxy configuration to inform the WebLogic Server and the HTTP Server that SSL ends at the reverse proxy level.
This was done by adding two HTTP headers to the requests before sending them to the Oracle Access Manager or Fusion Middleware HTTP Server.

# For the WebLogic Server to be informed that SSL ends at the reverse proxy level
RequestHeader set WL-Proxy-SSL true
# For the Oracle HTTP Server to take the secure cookies into account
RequestHeader set X-Forwarded-Proto "https"
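For clarity, here is a minimal sketch of where these directives sit in the SSL virtual host shown earlier (assuming mod_headers is loaded; only one ProxyPass pair is repeated here):

<VirtualHost 10.0.1.51:443>
ServerName https://mySite.com
# SSL terminates here, so inform the backends (requires mod_headers)
RequestHeader set WL-Proxy-SSL true
RequestHeader set X-Forwarded-Proto "https"
ProxyPass /myApplication http://appserver.example.com:7778/myApplication
ProxyPassReverse /myApplication http://appserver.example.com:7778/myApplication
SSLEngine on
SSLCertificateFile /etc/httpd/conf/ssl/mySite_com.crt
SSLCertificateKeyFile /etc/httpd/conf/ssl/mySite_com.key
</VirtualHost>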

The WAF needed to be configured to add the same HTTP headers in the production environment.

After those changes, the issue was solved.

 

The post User Session lost using ADF Application appeared first on Blog dbi services.

EDB containers for OpenShift 2.3 – PEM integration

Yann Neuhaus - Mon, 2018-09-17 11:19

A few days ago EnterpriseDB announced the availability of version 2.3 of the EDB containers for OpenShift. The main new feature in this release is the integration of PEM (Postgres Enterprise Manager), so in this post we’ll look at how we can bring up a PEM server in OpenShift. If you did not follow the last posts about EDB containers in OpenShift, you may want to catch up on those first.

The first step is to download the updated container images. You’ll notice that there are two new containers which were not available before the 2.3 release:

  • edb-pemserver: Obviously this is the PEM server
  • admintool: a utility container for supporting database upgrades and launching PEM agents on the database containers

For downloading the latest release of the EDB container images for OpenShift, the procedure is the following:

docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker login containers.enterprisedb.com

docker pull containers.enterprisedb.com/edb/edb-as:v10
docker tag containers.enterprisedb.com/edb/edb-as:v10 localhost:5000/edb/edb-as:v10
docker push localhost:5000/edb/edb-as:v10

docker pull containers.enterprisedb.com/edb/edb-pgpool:v3.6
docker tag containers.enterprisedb.com/edb/edb-pgpool:v3.6 localhost:5000/edb/edb-pgpool:v3.6
docker push localhost:5000/edb/edb-pgpool:v3.6

docker pull containers.enterprisedb.com/edb/edb-pemserver:v7.3
docker tag containers.enterprisedb.com/edb/edb-pemserver:v7.3 localhost:5000/edb/edb-pemserver:v7.3
docker push localhost:5000/edb/edb-pemserver:v7.3

docker pull containers.enterprisedb.com/edb/edb-admintool
docker tag containers.enterprisedb.com/edb/edb-admintool localhost:5000/edb/edb-admintool
docker push localhost:5000/edb/edb-admintool

docker pull containers.enterprisedb.com/edb/edb-bart:v2.1
docker tag containers.enterprisedb.com/edb/edb-bart:v2.1 localhost:5000/edb/edb-bart:v2.1
docker push localhost:5000/edb/edb-bart:v2.1

In my case I have quite a few EDB containers available now (…and I could go ahead and delete the old ones, of course):

docker@minishift:~$ docker images | grep edb
containers.enterprisedb.com/edb/edb-as          v10                 1d118c96529b        45 hours ago        1.804 GB
localhost:5000/edb/edb-as                       v10                 1d118c96529b        45 hours ago        1.804 GB
containers.enterprisedb.com/edb/edb-admintool   latest              07fda249cf5c        10 days ago         531.6 MB
localhost:5000/edb/edb-admintool                latest              07fda249cf5c        10 days ago         531.6 MB
containers.enterprisedb.com/edb/edb-pemserver   v7.3                78954c316ca9        10 days ago         1.592 GB
localhost:5000/edb/edb-pemserver                v7.3                78954c316ca9        10 days ago         1.592 GB
containers.enterprisedb.com/edb/edb-bart        v2.1                e2410ed4cf9b        10 days ago         571 MB
localhost:5000/edb/edb-bart                     v2.1                e2410ed4cf9b        10 days ago         571 MB
containers.enterprisedb.com/edb/edb-pgpool      v3.6                e8c600ab993a        10 days ago         561.1 MB
localhost:5000/edb/edb-pgpool                   v3.6                e8c600ab993a        10 days ago         561.1 MB
containers.enterprisedb.com/edb/edb-as                              00adaa0d4063        3 months ago        979.3 MB
localhost:5000/edb/edb-as                                           00adaa0d4063        3 months ago        979.3 MB
localhost:5000/edb/edb-pgpool                   v3.5                e7efdb0ae1be        4 months ago        564.1 MB
containers.enterprisedb.com/edb/edb-pgpool      v3.5                e7efdb0ae1be        4 months ago        564.1 MB
localhost:5000/edb/edb-as                       v10.3               90b79757b2f7        4 months ago        842.7 MB
containers.enterprisedb.com/edb/edb-bart        v2.0                48ee2c01db92        4 months ago        590.6 MB
localhost:5000/edb/edb-bart                     2.0                 48ee2c01db92        4 months ago        590.6 MB
localhost:5000/edb/edb-bart                     v2.0                48ee2c01db92        4 months ago        590.6 MB

The only bits I changed in the yaml file that describes my EDB AS deployment compared to the previous posts are two lines: the image: references, which now point to localhost:5000/edb/edb-pgpool:v3.6 and localhost:5000/edb/edb-as:v10:

apiVersion: v1
kind: Template
metadata:
   name: edb-as10-custom
   annotations:
    description: "Custom EDB Postgres Advanced Server 10.0 Deployment Config"
    tags: "database,epas,postgres,postgresql"
    iconClass: "icon-postgresql"
objects:
- apiVersion: v1 
  kind: Service
  metadata:
    name: ${DATABASE_NAME}-service 
    labels:
      role: loadbalancer
      cluster: ${DATABASE_NAME}
  spec:
    selector:                  
      lb: ${DATABASE_NAME}-pgpool
    ports:
    - name: lb 
      port: ${PGPORT}
      targetPort: 9999
    sessionAffinity: None
    type: LoadBalancer
- apiVersion: v1 
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-pgpool
  spec:
    replicas: 2
    selector:
      lb: ${DATABASE_NAME}-pgpool
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        labels:
          lb: ${DATABASE_NAME}-pgpool
          role: queryrouter
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-pgpool
          env:
          - name: DATABASE_NAME
            value: ${DATABASE_NAME} 
          - name: PGPORT
            value: ${PGPORT} 
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres'
          - name: REPL_PASSWORD
            value: 'postgres'
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-pgpool:v3.6
          imagePullPolicy: IfNotPresent
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-as10-0
  spec:
    replicas: 1
    selector:
      db: ${DATABASE_NAME}-as10-0 
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          db: ${DATABASE_NAME}-as10-0 
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-as10 
          env:
          - name: DATABASE_NAME 
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER 
            value: ${DATABASE_USER} 
          - name: DATABASE_USER_PASSWORD
            value: 'postgres'
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres'
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: REPL_PASSWORD
            value: 'postgres'
          - name: PGPORT
            value: ${PGPORT} 
          - name: RESTORE_FILE
            value: ${RESTORE_FILE} 
          - name: LOCALEPARAMETER
            value: ${LOCALEPARAMETER}
          - name: CLEANUP_SCHEDULE
            value: ${CLEANUP_SCHEDULE}
          - name: EFM_EMAIL
            value: ${EFM_EMAIL}
          - name: NAMESERVER
            value: ${NAMESERVER}
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName 
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP 
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-as:v10
          imagePullPolicy: IfNotPresent 
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5 
          livenessProbe:
            exec:
              command:
              - /var/lib/edb/testIsHealthy.sh
            initialDelaySeconds: 600 
            timeoutSeconds: 60 
          ports:
          - containerPort: ${PGPORT} 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
          - name: ${BACKUP_PERSISTENT_VOLUME}
            mountPath: /edbbackup
          - name: pg-initconf
            mountPath: /initconf
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: ${PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${PERSISTENT_VOLUME_CLAIM}
        - name: ${BACKUP_PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${BACKUP_PERSISTENT_VOLUME_CLAIM}
        - name: pg-initconf
          configMap:
            name: postgres-map
    triggers:
    - type: ConfigChange
parameters:
- name: DATABASE_NAME
  displayName: Database Name
  description: Name of Postgres database (leave edb for default)
  value: 'edb'
- name: DATABASE_USER
  displayName: Default database user (leave enterprisedb for default)
  description: Default database user
  value: 'enterprisedb'
- name: REPL_USER
  displayName: Repl user
  description: repl database user
  value: 'repl'
- name: PGPORT
  displayName: Database Port
  description: Database Port (leave 5444 for default)
  value: "5444"
- name: LOCALEPARAMETER
  displayName: Locale
  description: Locale of database
  value: ''
- name: CLEANUP_SCHEDULE
  displayName: Host Cleanup Schedule
  description: Standard cron schedule - min (0 - 59), hour (0 - 23), day of month (1 - 31), month (1 - 12), day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0). Leave it empty if you dont want to cleanup.
  value: '0:0:*:*:*'
- name: EFM_EMAIL
  displayName: Email
  description: Email for EFM
  value: 'none@none.com'
- name: NAMESERVER
  displayName: Name Server for Email
  description: Name Server for Email
  value: '8.8.8.8'
- name: PERSISTENT_VOLUME
  displayName: Persistent Volume
  description: Persistent volume name
  value: ''
  required: true
- name: PERSISTENT_VOLUME_CLAIM 
  displayName: Persistent Volume Claim
  description: Persistent volume claim name
  value: ''
  required: true
- name: BACKUP_PERSISTENT_VOLUME
  displayName: Backup Persistent Volume
  description: Backup Persistent volume name
  value: ''
  required: false
- name: BACKUP_PERSISTENT_VOLUME_CLAIM
  displayName: Backup Persistent Volume Claim
  description: Backup Persistent volume claim name
  value: ''
  required: false
- name: RESTORE_FILE
  displayName: Restore File
  description: Restore file location
  value: ''
- name: ACCEPT_EULA
  displayName: Accept end-user license agreement (leave 'Yes' for default)
  description: Indicates whether user accepts the end-user license agreement
  value: 'Yes'
  required: true

As the template starts with one replica I scaled that to three (see the command after the listing), so the setup we start with for PEM is this — one master and two replicas, which is the minimum you need for automated failover anyway:

dwe@dwe:~$ oc get pods -o wide -L role
NAME                 READY     STATUS    RESTARTS   AGE       IP           NODE        ROLE
edb-as10-0-1-4ptdr   1/1       Running   0          7m        172.17.0.5   localhost   standbydb
edb-as10-0-1-8mw7m   1/1       Running   0          5m        172.17.0.6   localhost   standbydb
edb-as10-0-1-krzpp   1/1       Running   0          8m        172.17.0.9   localhost   masterdb
edb-pgpool-1-665mp   1/1       Running   0          8m        172.17.0.8   localhost   queryrouter
edb-pgpool-1-mhgnq   1/1       Running   0          8m        172.17.0.7   localhost   queryrouter
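Scaling the deployment config created by the template is a single command — a sketch, with the dc name derived from the DATABASE_NAME parameter (edb):

dwe@dwe:~$ oc scale dc edb-as10-0 --replicas=3
deploymentconfig "edb-as10-0" scaled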

Nothing special has happened so far: we downloaded the new container images, pushed them to the local registry and adjusted the deployment yaml to reference the latest versions of the containers. What we want to do now is to create the PEM repository container so that we can add the database to PEM, which will give us monitoring and alerting. As PEM requires persistent storage as well, we need a new storage definition:

[Screenshot]

You can of course also get the storage definition using the “oc” command:

dwe@dwe:~$ oc get pvc
NAME                STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
edb-bart-claim      Bound     pv0091    100Gi      RWO,ROX,RWX                   16h
edb-pem-claim       Bound     pv0056    100Gi      RWO,ROX,RWX                   50s
edb-storage-claim   Bound     pv0037    100Gi      RWO,ROX,RWX                   16h

The yaml file for the PEM server is this one (notice that the container image referenced is coming from the local registry):

apiVersion: v1
kind: Template
metadata:
   name: edb-pemserver
   annotations:
    description: "Standard EDB Postgres Enterprise Manager Server 7.3 Deployment Config"
    tags: "pemserver"
    iconClass: "icon-postgresql"
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${DATABASE_NAME}-webservice 
    labels:
      name: ${DATABASE_NAME}-webservice
  spec:
    selector:
      role: pemserver 
    ports:
    - name: https
      port: 30443
      nodePort: 30443
      protocol: TCP
      targetPort: 8443
    - name: http
      port: 30080
      nodePort: 30080
      protocol: TCP
      targetPort: 8080
    type: NodePort
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: edb-pemserver
  spec:
    replicas: 1
    selector:
      app: pemserver 
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: pemserver 
          cluster: ${DATABASE_NAME} 
      spec:
        containers:
        - name: pem-db
          env:
          - name: DATABASE_NAME
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER
            value: ${DATABASE_USER}
          - name: ENTERPRISEDB_PASSWORD
            value: "postgres"
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          - name: PGPORT
            value: ${PGPORT}
          - name: RESTORE_FILE
            value: ${RESTORE_FILE}
          - name: ENABLE_HA_MODE
            value: "No"
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-as:v10
          imagePullPolicy: Always 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
        - name: pem-webclient 
          image: localhost:5000/edb/edb-pemserver:v7.3
          imagePullPolicy: Always 
          env:
          - name: DATABASE_NAME 
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER 
            value: ${DATABASE_USER} 
          - name: ENTERPRISEDB_PASSWORD
            value: "postgres"
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName 
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP 
          - name: PGPORT
            value: ${PGPORT}
          - name: CIDR_ADDR
            value: ${CIDR_ADDR}
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          - name: DEBUG_MODE
            value: ${DEBUG_MODE}
          ports:
          - containerPort: ${PGPORT} 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
          - name: httpd-shm
            mountPath: /run/httpd
        volumes:
        - name: ${PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${PERSISTENT_VOLUME_CLAIM}
        - name: httpd-shm 
          emptyDir:
            medium: Memory 
        dnsPolicy: ClusterFirst
        restartPolicy: Always
    triggers:
    - type: ConfigChange
parameters:
- name: DATABASE_NAME
  displayName: Database Name
  description: Name of Postgres database (leave edb for default)
  value: 'pem'
  required: true
- name: DATABASE_USER
  displayName: Default database user (leave enterprisedb for default)
  description: Default database user
  value: 'enterprisedb'
- name: PGPORT
  displayName: Database Port
  description: Database Port (leave 5444 for default)
  value: '5444'
  required: true
- name: PERSISTENT_VOLUME
  displayName: Persistent Volume
  description: Persistent volume name
  value: 'edb-data-pv'
  required: true
- name: PERSISTENT_VOLUME_CLAIM 
  displayName: Persistent Volume Claim
  description: Persistent volume claim name
  value: 'edb-data-pvc'
  required: true
- name: RESTORE_FILE
  displayName: Restore File
  description: Restore file location
  value: ''
- name: CIDR_ADDR 
  displayName: CIDR address block for PEM 
  description: CIDR address block for PEM (leave '0.0.0.0/0' for default) 
  value: '0.0.0.0/0' 
- name: ACCEPT_EULA
  displayName: Accept end-user license agreement (leave 'Yes' for default)
  description: Indicates whether user accepts the end-user license agreement
  value: 'Yes'
  required: true

Again, don’t process the template right now, just save it as a template:
[Screenshot]

Once we have that available we can start to deploy the PEM server from the catalog:
[Screenshot]

[Screenshot]

Of course we need to reference the storage definition we created above:
[Screenshot]

Leave everything else at its defaults and create the deployment:
[Screenshot]

A few minutes later you should have PEM ready:
[Screenshot]

For connecting to PEM with your browser have a look at the service definition to get the port:
[Screenshot]

Once you have that you can connect to PEM:
[Screenshots]

In the next post we’ll look at how we can add our existing database deployment to the PEM server we just created, so we can monitor the instances and configure alerting.

 

The post EDB containers for OpenShift 2.3 – PEM integration appeared first on Blog dbi services.

What privilege to view package body

Tom Kyte - Mon, 2018-09-17 08:46
Hi Tom: I have a problem when I grant the package privilege to another user. A is a normal user which is used in the factory environment. User B is for the app team and cannot create anything. First I grant create any procedure, execute any procedure...
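
A minimal sketch of the usual answer (schema, package and user names here are hypothetical): EXECUTE on a package lets the grantee run it and see the spec in ALL_SOURCE, but the body stays hidden; to let another user read the body, grant dictionary access, for example:

grant select_catalog_role to b;

-- user B can then read the body from DBA_SOURCE (owner/name are made up):
select text
from   dba_source
where  owner = 'A'
and    name  = 'MY_PKG'
and    type  = 'PACKAGE BODY'
order  by line;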
Categories: DBA Blogs

Hardware resource planning

Tom Kyte - Mon, 2018-09-17 08:46
Hello, Thanks for taking up this question. I am interested in understanding how to optimize the hardware resources (cores, memory, disk space) required for Oracle without impacting performance. There are multiple virtual machines in a VMwa...
Categories: DBA Blogs

system user could login without password or incorrect password

Tom Kyte - Mon, 2018-09-17 08:46
Hi all, recently I had an incident. I just logged into the database as system using sqlplus. When sqlplus prompted for the username I put 'SYS AS SYSDBA', and when prompted for the password, instead of entering my password I just hit the ENTER key and s...
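
The classic explanation (a sketch, assuming the connection was made locally on the database server): an OS user that belongs to the OSDBA group is authenticated by the operating system, so SQL*Plus accepts an empty or wrong password for a SYSDBA connection — the password is simply not checked. From the database server:

$ sqlplus / as sysdba

connects you as SYS without any password prompt at all.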
Categories: DBA Blogs

Returning count of rows deleted using execute immediate

Tom Kyte - Sun, 2018-09-16 14:46
How do I get the number of rows deleted within PL/SQL using the EXECUTE IMMEDIATE command?
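
A minimal sketch (table name and bind value are made up): SQL%ROWCOUNT reports the row count of dynamic DML just as it does for static DML, as long as you read it immediately after the EXECUTE IMMEDIATE:

declare
  l_deleted pls_integer;
begin
  execute immediate 'delete from t where x > :b' using 100;
  l_deleted := sql%rowcount;  -- rows removed by the dynamic DELETE above
  dbms_output.put_line(l_deleted || ' rows deleted');
end;
/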
Categories: DBA Blogs

Partitioning -- 5 : List Partitioning

Hemant K Chitale - Sun, 2018-09-16 10:14
List Partitioning allows you to specify a value (or a set of values) for the Partition Key to map to each Partition.

This example shows List Partitioning.

SQL> create table request_queue
2 (request_id number primary key,
3 request_submision_time timestamp,
4 requestor number,
5 request_arg_1 varchar2(255),
6 request_arg_2 varchar2(255),
7 request_arg_3 varchar2(255),
8 request_status varchar2(10),
9 request_completion_time timestamp)
10 partition by list (request_status)
11 (partition p_submitted values ('SUBMITTED'),
12 partition p_running values ('RUNNING'),
13 partition p_errored values ('ERRORED'),
14 partition p_completed values ('COMPLETED'),
15 partition p_miscell values ('RECHECK','FLAGGED','UNKNOWN'),
16 partition p_default values (DEFAULT)
17 )
18 /

Table created.

SQL>


Note how the P_MISCELL Partition can host multiple values for the REQUEST_STATUS column.
The last Partition is specified as a DEFAULT Partition (note that DEFAULT is a keyword, not a value like the others) to hold rows whose REQUEST_STATUS values are not mapped to any of the other Partitions.  With List Partitioning, you should always have a DEFAULT Partition (it can have any name, e.g. P_UNKNOWN) so that unmapped rows can be captured.
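
For example (a quick sketch, with a status value invented for illustration), a row whose REQUEST_STATUS appears in none of the VALUES lists lands in the DEFAULT Partition:

insert into request_queue
values (request_id_seq.nextval,systimestamp,102,
'FAC2',null,null,'ON-HOLD',null)
/

select count(*) from request_queue partition (p_default)
/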

If you go back to my previous post on Row Movement, you should realise the danger of capturing changing values (e.g. from "SUBMITTED" to "RUNNING" to "COMPLETED") in different Partitions.  What is the impact of updating a Request from the "SUBMITTED" status to the "RUNNING" status and then to the "COMPLETED" status ?  It is not simply an update of the REQUEST_STATUS column alone but a physical reinsertion of the entire row (with the consequent update to all indexes) at each change of status.

SQL> insert into request_queue
2 values (request_id_seq.nextval,systimestamp,101,
3 'FAC1','NOTE',null,'SUBMITTED',null)
4 /

1 row created.

SQL>
SQL> commit;

Commit complete.

.... sometime later ....

SQL> update request_queue
2 set request_status = 'RUNNING'
3 where request_id=1001
4 /
update request_queue
*
ERROR at line 1:
ORA-14402: updating partition key column would cause a partition change


SQL>


So, although we now know that we must ENABLE ROW MOVEMENT, we must still suffer the impact of the physical reinsertion of the entire row into a new Partition.

SQL> alter table request_queue enable row movement;

Table altered.

SQL> update request_queue
2 set request_status = 'RUNNING'
3 where request_id=1001
4 /

1 row updated.

SQL> commit;

Commit complete.

SQL>
.... sometime later ....

SQL> update request_queue
2 set request_status = 'COMPLETED',
3 request_completion_time=systimestamp
4 where request_id=1001
5 /

1 row updated.

SQL> commit;

Commit complete.

SQL>


(Note that all the previous "Partitioning 3a to 3d" posts about Indexing apply to List Partitioning as well)
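
For instance (a sketch against the table above; the column choice is only for illustration), a LOCAL index is automatically equi-partitioned with the List-Partitioned table, one index partition per table partition:

create index request_queue_rqstr_ndx
on request_queue (requestor) local
/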



Categories: DBA Blogs

Oracle Core Audit - Do you Audit your Core database engine for breach?

Pete Finnigan - Sat, 2018-09-15 20:26
Oracle's core database audit is a useful tool to monitor activity of the core database engine or applications and detect potential abuses. It seems to be a sad fact that with a lot of companies that I visit and from....[Read More]

Posted by Pete On 15/09/18 At 08:28 AM

Categories: Security Blogs

Updating Exadata Software summary

Syed Jaffar - Sat, 2018-09-15 07:06
Updating the Exadata software is one of the crucial tasks for any Database Machine Administrator (DMA). Though you don't necessarily have to patch an environment whenever Oracle releases a new patch, it is highly recommended to patch the systems at least twice a year to fix known and unknown bugs, security vulnerabilities and other issues.

This blog post gives an overall overview of software updates on an Exadata Database Machine: which components need updates, the update order of the components, the prerequisites, and so on.

Typically, Exadata database machine updates are divided in the following categories:

  • Exadata Infrastructure Software 
  • Grid Infrastructure and Oracle Database Software

Updating the Exadata Software comprises of following components:

  • Storage Servers
  • Database Servers
  • InfiniBand Switches
A software upgrade for the cell and DB nodes typically contains updates for the following:
  • Oracle Linux OS
  • Exadata Software
  • Firmware (Disk, Flash, RAID Controller, HCA, ILOM, etc.)

Pre-requisites

The following pre-upgrade activities are highly recommended before upgrading the Exadata software in any environment:

  • Review MOS Doc 888828.1 and download the target version software
  • Download observer.patch.zip from MOS Doc 1553103.1
  • Review MOS Doc 1270094.1 for any critical issues
  • Run the latest version of the ExaCHK utility and fix any FAIL and WARNING findings reported in the ExaCHK report. Also, review the version recommendations in the MAA Scorecard section
  • Ensure you have latest upgrade/patching utilities, such as, patchmgr, opatch etc. (MOS Doc 1070954.1)
  • Perform prerequisites checks
  • Back up the Exadata database servers before the update
Rolling vs Non-rolling upgrades

Software updates can be performed in an online or offline (rolling or non-rolling) fashion. For online (rolling) updates, HIGH redundancy for the ASM disk groups is highly recommended to avoid any data or service loss.
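
A quick way to verify this before a rolling update (a sketch) is to check the redundancy type of the disk groups; TYPE should report HIGH rather than NORMAL:

select name, type from v$asm_diskgroup;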

As part of best practices, the following update order is recommended and considered safe:
  1. GI and Oracle Database home
  2. Database Servers
  3. Storage Servers
  4. IB Switches
The patchmgr update utility

The patchmgr update utility is used to patch the Exadata infrastructure components. Its capabilities are the following:
  • A single invocation covers database servers, storage servers and IB switches
  • Updates firmware, OS and Exadata software
  • Supports online (rolling) updates
Conclusion: Though the procedure looks pretty straightforward when you read about it, in my experience patching each environment comes with surprises, and we need to be ready for them, unless we are very lucky on the particular day and have a smooth patching experience.

In the upcoming posts, I will talk about how to use patchmgr and other update utilities to update the Exadata software, database servers, storage servers and IB switches.

Business Logic for Business Object in Visual Builder - Triggers, Object Functions, Groovy and More

Shay Shmeltzer - Fri, 2018-09-14 18:15

The business objects that you create in Visual Builder Cloud Service (VBCS) are quite powerful. Not only can they store data, manage relationships, and give you a rich REST interface for interacting with them, they can also execute dedicated business logic that deals with the data.

If you click on the Business Rules section of a business object you'll see that you can create:

  • Triggers - allow you to react to data events such as insert, update, and delete on records.
  • Object and field Validators - allow you to make sure that data at the field or record level is correct.
  • Object Functions - A way to define "service methods" that encapsulate logic related to a business object. These functions can be invoked from various points in your application, and also from outside your app.

To code logic in any of these locations you will leverage the Groovy language.

I wanted to show the power of some of the functionality you can achieve with these hook points for logic. The demo scenario below is based on a requirement we got from a customer to be able to send an email with the details of all the child records that belong to a specific master record. Imagine a scenario where we have travel requests associated with specific airlines. When we go to delete an airline, we want to send an email that will notify someone about the travel requests that are going to be impacted by this change.

To achieve this I used an accessor - an object that helps you traverse relationships between the two objects - to loop over the records and collect them.

In the video below you'll see a couple of important points:

  • Business object relationship and how to locate the name of an accessor
  • Using a Trigger Event to send an email
  • Passing an object function as a parameter to an email template
  • Coding groovy in a business object

For those interested, the specific Groovy code I used is:

def children = TravelRequests; // Accessor name to child collection
def ret_val = "List of travel requests ";
if (!children.hasNext()) {
  return "no impact";
}
while (children.hasNext()) {
  def emprec = children.next();
  def name = emprec.name;
  ret_val = ret_val + " " + name;
}
return ret_val;


By the way - if, like me, you come from a background of using Oracle ADF Business Components, you might find many of the things we did here quite familiar. That's because we are leveraging Oracle ADF Business Components in this layer of Visual Builder Cloud Service. So looking up old Groovy tutorials and blogs about ADF BC might prove useful here too :-)


Categories: Development

Oracle Linux on Arm (aarch64) update

Wim Coekaerts - Fri, 2018-09-14 10:44

Nothing new to announce but I wanted to take a few minutes to give a little update on where we are with Oracle Linux for Arm. Just a quick summary:

- We have a full version of Oracle Linux 7 (update 5) for Arm. This is freely downloadable from edelivery. The ISO is a free download; you can freely use it and redistribute it. Just like Oracle Linux x86. No authorization codes, no activation keys. Just download, install and use. Of course, this includes all source code.

- OL7 on Arm uses UEKR5 (4.14.x Linux) including DTrace support. (Sometimes I hear people say that UEK is a proprietary kernel. It is not! It is fully open. All the changes are public, so you actually get to see every single commit of every single change we or others made, not just a tar file. It's OPEN.)

- there are a ton of packages built for OL/Arm:

ol7_MySQL80/aarch64                MySQL 8.0 for Oracle Linux 7 (aarch64)        32
ol7_developer/aarch64              Oracle Linux 7Server Packages for Develo      15
ol7_developer_EPEL/aarch64         Oracle Linux 7Server EPEL Packages for D  12,410
ol7_developer_UEKR5/aarch64        Oracle Linux 7Server Unbreakable Enterpr     183
ol7_latest/aarch64                 Oracle Linux 7Server Latest (aarch64)      8,881
ol7_optional_latest/aarch64        Oracle Linux 7Server Optional Latest (aa   7,246
ol7_software_collections/aarch64   Software Collection Library for Oracle L     136
repolist: 28,903

This includes a ton of EPEL stuff, as you can see above. We have a devtoolset containing gcc 7.3.1, and we have support for other languages: golang 1.10, nodejs, python, php, ... docker is there ... lots of goodies for a good, easy, full-fledged development environment.

As a reminder: if you have an Arm box and you want to use docker, we have images on Docker Hub for Arm as well.

you can simply do:

# docker pull oraclelinux:latest

and it pulls in the Arm docker image for Oracle Linux.


# docker pull oraclelinux:latest
latest: Pulling from library/oraclelinux
cd165b3abf95: Download complete
[6329822.343702] XFS (dm-3): Mounting V4 Filesystem
cd165b3abf95: Extracting 86.45MB/86.45MB
cd165b3abf95: Pull complete
Digest: sha256:d60084c2aea5fa6cb8ed20c04ea5a8cd39c176c82a9015cc59ad6e860855c27f
Status: Downloaded newer image for oraclelinux:latest


We are proud to announce:

Yann Neuhaus - Fri, 2018-09-14 09:42


(no words required for this post, the image says it all)


The post We are proud to announce: appeared first on Blog dbi services.

Parse string then flatten into columns

Tom Kyte - Fri, 2018-09-14 07:46
Hi, LiveSQL link not accepted by the form: https://livesql.oracle.com/apex/livesql/s/g88hb5van1r4ctc65yp4lq9gb I have this situation (see link): ID / String: Id1 / Thing1: Sub1, Thing2: Sub7, Sub8, Sub9, Thing3: Sub12; Id1 / Thing...
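
One common technique for this kind of requirement (a sketch, not the answer from the thread; the literal stands in for the real column): split the delimited values with REGEXP_SUBSTR and a CONNECT BY row generator, then pivot as needed:

select regexp_substr('Sub7,Sub8,Sub9', '[^,]+', 1, level) as val
from   dual
connect by level <= regexp_count('Sub7,Sub8,Sub9', '[^,]+');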
Categories: DBA Blogs

Oracle View object - performance Issue with Outer Join including a WITH clause

Tom Kyte - Fri, 2018-09-14 07:46
Hi Tom; thank you, I've been using your site for 2 years now and have resolved many issues based on your answers. Case scenario: a customer has multiple addresses, only one of which is active; in some cases ALL the addresses of a customer could be inactive....
Categories: DBA Blogs
