Pakistan's First Oracle Blog

Blog By Fahd Mirza Chughtai

Solution of Nuget Provider Issue with PowerShell and AWS Tools

Wed, 2021-02-24 20:08

On an AWS EC2 Windows 2012 server, my goal was to write some data to an S3 bucket. I was using a small PowerShell script to copy the file to the S3 bucket. For that I needed to install AWS Tools for PowerShell, and I used the following command at a PowerShell prompt running as administrator:

Windows PowerShell

Copyright (C) 2016 Microsoft Corporation. All rights reserved.


PS C:\Users\SRV> Install-Module -Scope CurrentUser -Name AWSPowerShell.NetCore -Force

and it failed with the following error:

NuGet provider is required to continue

PowerShellGet requires NuGet provider version '2.8.5.201' or newer to interact with NuGet-based repositories. The NuGet

 provider must be available in 'C:\Program Files\PackageManagement\ProviderAssemblies' or

'C:\Users\SRV\AppData\Local\PackageManagement\ProviderAssemblies'. You can also install the NuGet provider

by running 'Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force'. Do you want PowerShellGet to install

 and import the NuGet provider now?

[Y] Yes  [N] No  [S] Suspend  [?] Help (default is "Y"): y

WARNING: Unable to download from URI 'https://go.microsoft.com/fwlink/?LinkID=627338&clcid=0x409' to ''.

WARNING: Unable to download the list of available providers. Check your internet connection.

PackageManagement\Install-PackageProvider : No match was found for the specified search criteria for the provider

'NuGet'. The package provider requires 'PackageManagement' and 'Provider' tags. Please check if the specified package

has the tags.

Solution:

The solution is to enable TLS 1.2 on this Windows host, which you can do by running the following in a PowerShell window opened as administrator:


Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord


Close your PowerShell window, reopen it as administrator, and check that the TLS 1.2 protocol is present by typing the following command at the PS prompt:

[Net.ServicePointManager]::SecurityProtocol

If the above shows Tls12 in the output, then you are all good and should now be able to install the AWS Tools.
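
If you prefer not to touch the registry, a session-only alternative (a minimal sketch, assuming you only need TLS 1.2 for this one install) is to force the protocol for the current PowerShell session and retry:

# Force TLS 1.2 for the current session only, then retry the provider and module install
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
Install-Module -Scope CurrentUser -Name AWSPowerShell.NetCore -Force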

I hope that helps.




Categories: DBA Blogs

Boto3 Dynamodb TypeError: Float types are not supported. Use Decimal types instead

Mon, 2021-02-22 01:26

I was trying to ram data into AWS DynamoDB via Boto3 and the streaming failed with the following error:


  File "C:\Program Files\Python37\lib\site-packages\boto3\dynamodb\types.py", line 102, in serialize

    dynamodb_type = self._get_dynamodb_type(value)

  File "C:\Program Files\Python37\lib\site-packages\boto3\dynamodb\types.py", line 115, in _get_dynamodb_type

    elif self._is_number(value):

  File "C:\Program Files\Python37\lib\site-packages\boto3\dynamodb\types.py", line 160, in _is_number

    'Float types are not supported. Use Decimal types instead.')

TypeError: Float types are not supported. Use Decimal types instead.



I was actually getting some raw datapoints from CloudWatch for later analytics. These datapoints were in float format, which is not supported by DynamoDB. Instead of importing decimal libraries or doing JSON manipulation, you can solve the above with a simple Python format expression like this:

"{0:.2f}".format(datapoint['Average'])

It worked like a charm afterwards. I hope that helps.
Categories: DBA Blogs

Main SQL Window Functions for Data Engineers in Cloud

Fri, 2021-02-19 22:36

Becoming a data engineer in the cloud requires, among various other things, a good grasp of SQL. SQL is the premier tool for interacting with data sets. At first it seems daunting to see all those SQL analytic functions, but if you start with a tiny dataset like the one in the examples below and understand how these functions work, then it all becomes very easy for datasets of any volume.

Once you know the basic structure of SQL and understand the basic clauses, it's time to jump into the main analytic functions. Below I have used SQL's WITH clause to generate a tiny dataset in Oracle. You don't have to create a table, load it with sample data, and play with it. Just use the WITH clause with the accompanying SELECT statements, which demonstrate the common SQL window functions.


1- In this example, the SUM and ROW_NUMBER functions work on each row of the whole window.

   

With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,sum(t) over () as SumEachRow, row_number() over (order by t) as RN from x;
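
For reference, with this four-row dataset the first query returns output along these lines (harry and jade can swap row numbers since they tie on t):

NAME    T   SUMEACHROW   RN
tom     1   8            1
harry   2   8            2
jade    2   8            3
ponzi   3   8            4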


2- In this example, the SUM and ROW_NUMBER functions work on each row of each partition of the window. The window is partitioned on column t.


With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,sum(t) over (partition by t) as SumEachRow, row_number() over (partition by t order by t) as RN from x;


3- In the following example, we have divided the window into 2 partitions by using a CASE expression within the PARTITION BY clause. One partition is the rows where t=1, and the other partition is composed of the rest of the rows.


With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,sum(t) over (partition by CASE WHEN t = 1 THEN t ELSE NULL END) as SumEachRow, row_number() over (partition by CASE WHEN t = 1 THEN t ELSE NULL END order by t) as RN from x;


4- The below example is a variant of example 3. Here the window function ROW_NUMBER works on the whole window instead of a partition, whereas the window function SUM works on the partitions.


With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,sum(t) over (partition by CASE WHEN t = 1 THEN t ELSE NULL END) as SumEachRow, row_number() over (order by t) as RN from x;


5- This example uses the LAG function to return the previous value within the window. For LAG, the value for the first row is always NULL, as there is no previous value.


With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,lag(t) over (order by t) as Previous_t from x;


6- This example uses the LEAD function to return the next value within the window. For LEAD, the value for the last row is always NULL, as there is no next value.


With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,lead(t) over (order by t) as Next_t from x;


7- This example shows that the FIRST_VALUE function returns the first value in the window for each row.


With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,first_value(t) over (order by t) as First_t from x;


8- This example shows that the FIRST_VALUE function returns the first value in each partition of the window for each row.

With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,first_value(t) over (partition by t order by t) as First_t from x;


9- This example shows that the LAST_VALUE function returns the last value in the window for each row.


With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,last_value(t) over (order by t ROWS BETWEEN

           UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as Last_t from x;


10- This example shows that the LAST_VALUE function returns the last value in each partition of the window for each row.

With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,last_value(t) over (partition by t order by t ROWS BETWEEN

           UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as Last_t from x;


For an explanation of the ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING clause, see this 

11- This example shows the RANK function, which is useful for Top N or Bottom N sorts of queries. The following is for the whole window. The main idea is that the rank starts at 1 for the first row and stays the same for rows with the same value within the window. When the value changes, the rank jumps to that row's position from the top, so ranks can skip numbers after ties.

With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,rank() over (order by t) as Rank from x;


12- This example shows the RANK function again, this time computed for each partition of the window.


With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,rank() over (partition by t order by t) as Rank from x;


PS. Yes, I know the formatting of the code chunks is not good enough, but that seems to be a limitation of the Blogger platform, and another note to self that I need to move to a better one.

Categories: DBA Blogs

Docker Behind Proxy on CentOS - Solution to Many Issues

Thu, 2021-01-28 22:50

If you are running Docker behind a proxy on CentOS and receiving timeout or network errors, then use the steps below to configure proxy settings on the CentOS box where Docker is installed and you are trying to build a Docker image:

Log in as the user that is going to build the image


Create the directory with sudo:

    sudo mkdir -p /etc/systemd/system/docker.service.d


Create a file for the HTTP proxy setting:

    /etc/systemd/system/docker.service.d/http-proxy.conf

    and insert the following content into it:

    [Service]

    Environment="HTTP_PROXY=http://yourproxy.com:80/"


Create a file for the HTTPS proxy setting:

    /etc/systemd/system/docker.service.d/https-proxy.conf

    and insert the following content into it:

    [Service]

    Environment="HTTPS_PROXY=https://yourproxy.com:80/"


Reload the systemd daemon:

systemctl daemon-reload


Restart Docker:

service docker restart
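
To confirm the drop-in files were actually picked up (an optional quick check, assuming systemd):

# Should print Environment=HTTP_PROXY=... HTTPS_PROXY=...
systemctl show --property=Environment docker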


Also, if you are trying to install Yarn or NPM within your Dockerfile, then define the following environment variables within your Dockerfile:

ENV http_proxy=http://yourproxy.com

ENV https_proxy=http://yourproxy.com

ENV HTTP_PROXY=http://yourproxy.com

ENV HTTPS_PROXY=http://yourproxy.com


Notice that we specify only the http:// protocol for both the HTTP and HTTPS proxy variables.

Restart Docker again.

I hope that helps.


Categories: DBA Blogs

Most Underappreciated AWS Service and Why

Tue, 2021-01-05 17:11

Who wants to mention in their resume that one of their operational tasks is to tag cloud resources? Well, I did, and mentioned that one of the tools I used for that purpose was Tag Editor. The interviewer was surprised to learn that there was such a thing in AWS which allowed tagging multiple resources at once. I got the job thanks to this most under-appreciated and largely unknown service.

Tagging is boring but essential. As cloud usage matures, tagging is fast becoming an integral part of it. In the environments I manage, most tagging management is automated, but there is still a requirement at times for manual bulk tagging, and that's where Tag Editor comes in very handy. Besides bulk tagging, Tag Editor enables you to search for the resources that you want to tag, and then manage tags for the resources in your search results.

There are various other tools available from AWS to ensure tag compliance and management, but the reason I like Tag Editor most is its ease of use and a single pane of glass to search resources by tag keys, tag values, region, or resource type. It's not as glamorous as AWS Monitron, AWS Proton, or AWS Fargate, but it is as useful as any other service.

In our environment, if it's not tagged then it's not allowed in the cloud. Tag Editor addresses the basics of being in the cloud. Get it right, and you are well on your way to a well-architected cloud infrastructure.

Categories: DBA Blogs

From DBA to DBI

Mon, 2020-10-19 18:48

Recently Pradeep Parmer at AWS had a blog post about transitioning from DBA to DBI, or in other words from database administrator to database innovator. I wonder what exactly the difference is here, as any DBA worth his or her salt is already an innovator.

Administering a database is not about sleepily issuing backup commands or, in the case of cloud managed databases, clicking here and there. Database administration has evolved over time just like other IT roles and is totally different from what it was a few years back.

Regardless of the database engine you use, you have to have a breadth of knowledge about operating systems, networking, automation, and scripting, on top of database concepts. With managed database services in the cloud like AWS RDS, GCP Cloud SQL, or BigQuery, many of the old skills have become outdated, but new ones have sprung up. That has always been the case with the DBA field.

Taking the example of Oracle: what we were doing in Oracle 8i became obsolete in Oracle 11g, and Oracle 19c is a totally different beast. Oracle Exadata, RAC, the various types of DR services, and Fusion Middleware are a new ballgame with every version.

Even with managed database services, the role of the DBA has become more involved in terms of migrations and then optimizing what runs within the databases to stop database costs from going through the roof.

So the point here is that DBAs have always been innovators. They have always been finding new ways to automate the management and healing of their databases. They are always under pressure to eke out the last possible optimization from their systems, and that's still the case even if those databases are supposedly managed by cloud providers.

With purpose-built databases, which address a different use case with a different database technology, the role of the DBA has only become more relevant, as DBAs have to evolve to cover all these graph, in-memory, and other nifty types of databases.

We have always been innovators my friend. 

Categories: DBA Blogs

What is Purpose Built Database

Mon, 2020-10-05 17:30

In simple words, a general database engine is a big, clunky piece of software with features for all use cases, and it's up to you to choose which features to use. Whereas in a purpose-built database, you get a lean, specific database which is only suitable for the feature you want.

For instance, AWS offers 15 purpose-built database engines including relational, key-value, document, in-memory, graph, time series, and ledger databases. GCP also provides multiple database types like Spanner, BigQuery, etc.

But the thing is that the one-size-fits-all monolithic databases aren't going anywhere. They are here to stay. A medium to large organization has way too many requirements and features in use, and having one database for every use case increases the footprint and the cost. For every production database there is a dev, test, and QA database, so the footprint keeps increasing.

So although the notion of a purpose-built database is great, it's not going to throw the monolithic database out of the window. It just provides another option: an organization can use a managed purpose-built database for a specialized use case, but for general OLTP and data warehouse requirements, monolithic is still the way.

Categories: DBA Blogs

5 Important Steps Before Upgrading Oracle on AWS RDS

Sat, 2020-09-26 23:03

Even though AWS RDS (Relational Database Service) is a managed service, which means that you won't have to worry about upgrades, patches, and other tidbits, you still have the option of manually triggering an upgrade at a time of your choosing.

Upgrading an Oracle database is quite critical, not only for the database itself but, more importantly, for the dependent applications. It's very important to try out any upgrade on a representative test system beforehand to iron out any wrinkles and check the timings and any other potential issues.

There are 5 important steps you can take before upgrading Oracle on AWS RDS to make the process more risk-free, speedy, and reliable:

  1. Check for invalid objects such as procedures, functions, and packages in your database (a quick query for this is sketched after the list).
  2. Make a list of the objects which are still invalid and, if possible, drop them to remove clutter.
  3. Disable and remove audit logs if they are stored in the database.
  4. Convert DBMS_JOB jobs and related scheduling to DBMS_SCHEDULER.
  5. Take a snapshot of your production database right before you upgrade; this speeds up the upgrade because only a delta snapshot then needs to be taken during the upgrade.
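
For step 1, a simple query along these lines lists the invalid objects (assuming your RDS master user can read DBA_OBJECTS):

-- List invalid objects grouped by owner, type, and name
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID'
ORDER  BY owner, object_type, object_name;
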
I hope that helps.

Categories: DBA Blogs

Choice State in AWS Step Functions

Thu, 2020-09-17 02:47

Richly asynchronous, serverless applications can be built using AWS Step Functions. The enhanced Choice state support in AWS Step Functions is one of the newest features, and it was long awaited.

In simple words, we define steps and their transitions and call the whole thing a state machine. In order to define this state machine, we use the Amazon States Language (ASL). ASL is a JSON-based structured language that defines state machines and collections of states that can perform work (Task states), determine which state to transition to next (Choice states), and stop execution on an error (Fail states).

So if the requirement is to add branching logic like an if-then-else or case statement to our state transitions, then the Choice state comes in handy. The update introduces various new operators into the ASL, and the sky is now the limit with the possibilities. Operators for the Choice state include comparison operators like IsNull and IsString, existence operators like IsPresent, glob-style wildcard matching against strings, and variable-to-variable string comparison.
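
Here is a minimal, hypothetical state machine fragment showing a Choice state that uses one of the newer operators (IsPresent) alongside a plain numeric comparison; the state and field names are made up:

{
  "StartAt": "CheckOrder",
  "States": {
    "CheckOrder": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.orderId", "IsPresent": false, "Next": "FailMissingOrder" },
        { "Variable": "$.total", "NumericGreaterThan": 100, "Next": "ManualApproval" }
      ],
      "Default": "AutoApprove"
    },
    "FailMissingOrder": { "Type": "Fail", "Error": "MissingOrderId" },
    "ManualApproval": { "Type": "Pass", "End": true },
    "AutoApprove": { "Type": "Pass", "End": true }
  }
}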

Choice State enables developers to simplify existing definitions or add dynamic behavior within state machine definitions. This makes it easier to orchestrate multiple AWS services to accomplish tasks. Modelling complex workflows with extended logic is now possible with this new feature.

Now one hopes that AWS introduces a way to do all of this graphically instead of dabbling in ASL.

Categories: DBA Blogs

CloudFormation Template for IAM Role with Inline Policy

Tue, 2020-08-18 21:10
I struggled a bit to create a CloudFormation template for an IAM role with an inline policy and an IAM user as the principal. So here it is as a quick reference:


AWSTemplateFormatVersion: 2010-09-09
Parameters:
  vTableName:
    Type: String
    Description: the tablename
    Default: arn:aws:dynamodb:ap-southeast-2:1234567:table/test-table
  vUserName:
    Type: String
    Description: New account username
    Default: mytestuser
Resources:
  DynamoRoleForTest:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              AWS:
                - !Sub 'arn:aws:iam::${AWS::AccountId}:user/${vUserName}'
            Action:
              - sts:AssumeRole
      Path: /
      Policies:
        - PolicyName: DynamoPolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - dynamodb:BatchGet*
                  - dynamodb:DescribeStream
                  - dynamodb:DescribeTable
                  - dynamodb:Get*
                  - dynamodb:Query
                  - dynamodb:Scan
                Resource: !Ref vTableName

I hope that helps. Thanks.
Categories: DBA Blogs

How to Read Docker Inspect Output

Fri, 2020-08-14 21:52

Here is a quick and easy set of instructions on how to read docker inspect output:

First you run the command:

docker inspect <image id> or <container id>

and it outputs JSON. You are normally interested in what exactly is in this Docker image which you have just pulled from the web or inherited in your new job.

Now copy this JSON output and put it in VS Code or any online JSON editor of your choice. For a quick glance, look at the node "ContainerConfig". This node tells you exactly what was run within the temporary container which was used to build this image, such as CMD, ENTRYPOINT, etc.
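
If you would rather not scroll the whole document, a couple of command-line filters also work (a sketch; it assumes the image ID is in $IMAGE_ID and that jq is installed):

# Pretty-print just the ContainerConfig node
docker inspect --format '{{json .ContainerConfig}}' "$IMAGE_ID" | jq .

# Pull out the entrypoint and command directly
docker inspect --format 'ENTRYPOINT={{.Config.Entrypoint}} CMD={{.Config.Cmd}}' "$IMAGE_ID"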

In addition to the above, the following is a description of the important bits of information found in the inspect command output:

  • ID: The unique identifier of the image.
  • Parent: A link to the identifier of the parent image of this image.
  • Container: The temporary container created when the image was built.
  • ContainerConfig: Contains what happened in that temporary container.
  • DockerVersion: The version of Docker used to create the image.
  • VirtualSize: The image size in bytes.

I hope that helps.

Categories: DBA Blogs

Installing Docker on Amazon Linux 2

Thu, 2020-08-13 00:50
Installing Docker on Amazon Linux 2 is full of surprises which are not easy to deal with. I just wanted to test something within a container environment, so I spun up a new EC2 instance from the following AMI:

Amazon Linux 2 AMI (HVM), SSD Volume Type - ami-0ded330691a314693 (64-bit x86) / ami-0c3a4ad3dbe082a72 (64-bit Arm)

After this Linux instance came up, I just ran yum update to get all the latest stuff:

 sudo yum update

All good so far.
Then I installed/checked yum-utils and grabbed the docker repo, and all good there:

[ec2-user@testf ~]$ sudo yum install -y yum-utils
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
Package yum-utils-1.1.31-46.amzn2.0.1.noarch already installed and latest version
Nothing to do

[ec2-user@testf ~]$ sudo yum-config-manager \
>     --add-repo \
>     https://download.docker.com/linux/centos/docker-ce.repo
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo


Now, it's time to install docker:

[ec2-user@testf ~]$ sudo yum install docker-ce docker-ce-cli containerd.io
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
amzn2-core                                                                                                               | 3.7 kB  00:00:00
docker-ce-stable                                                                                                         | 3.5 kB  00:00:00
(1/2): docker-ce-stable/x86_64/primary_db                                                                                |  45 kB  00:00:00
(2/2): docker-ce-stable/x86_64/updateinfo                                                                                |   55 B  00:00:00
Resolving Dependencies
--> Running transaction check
---> Package containerd.io.x86_64 0:1.2.13-3.2.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: containerd.io-1.2.13-3.2.el7.x86_64
---> Package docker-ce.x86_64 3:19.03.12-3.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: 3:docker-ce-19.03.12-3.el7.x86_64
--> Processing Dependency: libcgroup for package: 3:docker-ce-19.03.12-3.el7.x86_64
---> Package docker-ce-cli.x86_64 1:19.03.12-3.el7 will be installed
--> Running transaction check
---> Package containerd.io.x86_64 0:1.2.13-3.2.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: containerd.io-1.2.13-3.2.el7.x86_64
---> Package docker-ce.x86_64 3:19.03.12-3.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: 3:docker-ce-19.03.12-3.el7.x86_64
---> Package libcgroup.x86_64 0:0.41-21.amzn2 will be installed
--> Finished Dependency Resolution
Error: Package: containerd.io-1.2.13-3.2.el7.x86_64 (docker-ce-stable)
           Requires: container-selinux >= 2:2.74
Error: Package: 3:docker-ce-19.03.12-3.el7.x86_64 (docker-ce-stable)
           Requires: container-selinux >= 2:2.74
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest


and it failed. 

So I googled the error "Requires: container-selinux", and every Stack Overflow post and blog says to download a newer rpm from some CentOS or similar mirror, but that simply doesn't work, no matter how hard you try.

Here is the solution which ultimately enabled me to get Docker installed on Amazon Linux 2 on this EC2 server:

sudo rm /etc/yum.repos.d/docker-ce.repo

sudo amazon-linux-extras install docker

sudo service docker start

[ec2-user@~]$ docker --version

Docker version 19.03.6-ce, build 369ce74
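
As an optional follow-up (not required for the fix above, and assuming the default ec2-user account), you can let that user run docker without sudo and have the service start on boot:

# Allow ec2-user to run docker without sudo (log out and back in for it to take effect)
sudo usermod -aG docker ec2-user

# Start docker automatically on reboot
sudo systemctl enable docker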


That's it. I hope that helps.
Categories: DBA Blogs

Quick Intro to BOTO3

Mon, 2020-08-10 03:37

I just published my very first tutorial video on YouTube, which gives a quick introduction to AWS Boto3 with a step-by-step walkthrough of a simple program. Please feel free to subscribe to my channel. Thanks. You can find the video here.

Categories: DBA Blogs

Checklist While Troubleshooting Workload Errors in Kubernetes

Fri, 2020-08-07 02:21

Following is a checklist for troubleshooting workload/application errors in Kubernetes (a kubectl sketch of these checks follows the list):

1- First check how many nodes there are

2- Check which namespaces are present

3- Check which namespace the faulty application is in

4- Now check which deployment the faulty app belongs to

5- Now check which replicaset (if any) is part of that deployment

6- Then check which pods are part of that replicaset

7- Then check which services are part of that namespace

8- Then check which service corresponds to the deployment where our faulty application is

9- Then make sure the label selectors from the deployment to its pod template are correct

10- Then ensure the label selector in the service matches the pod labels

11- Then check that any service name referred to in a deployment is correct. For example, a webserver pod referring to the database host (which will be the service name of the database) in the env of its pod template should have the correct value

12- Then check that the ports are correct in the ClusterIP or NodePort services

13- Check that the status of the pods is Running

14- Check the logs of the pods and containers
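
Here is a rough kubectl walkthrough of the same checks (the namespace, deployment, service, and pod names are placeholders):

kubectl get nodes                                        # 1 - how many nodes there are
kubectl get namespaces                                   # 2 - which namespaces are present
kubectl get deploy,rs,pods -n my-namespace               # 4-6 - deployment, replicaset, and pods
kubectl get svc -n my-namespace                          # 7-8 - services in that namespace
kubectl describe deploy my-deployment -n my-namespace    # 9 - label selectors in the pod template
kubectl describe svc my-service -n my-namespace          # 10, 12 - selectors and ports
kubectl get pods -n my-namespace                         # 13 - pod status
kubectl logs my-pod -n my-namespace --all-containers     # 14 - pod and container logs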

I hope that helps, and feel free to add any step or thought in the comments. Thanks.

Categories: DBA Blogs

Different Ways to Access Oracle Cloud Infrastructure

Thu, 2020-08-06 09:00

This is a quick jot down of different ways you can access the ever-improving Oracle Cloud Infrastructure (OCI). Most types of Oracle Cloud Infrastructure resources have a unique, Oracle-assigned identifier called an Oracle Cloud ID (OCID).

You can access Oracle Cloud Infrastructure using the Console (a browser-based interface) or the REST API. To access the Console, you must use a supported browser. You can go to the sign-in page. You will be prompted to enter your cloud tenant, your user name, and your password. The Oracle Cloud Infrastructure APIs are typical REST APIs that use HTTPS requests and responses.

All Oracle Cloud Infrastructure API requests must be signed for authentication purposes. All Oracle Cloud Infrastructure API requests must support HTTPS and SSL protocol TLS 1.2. Oracle Cloud Infrastructure provides a number of Software Development Kits (SDKs) and a Command Line Interface (CLI) to facilitate development of custom solutions.

Software Development Kits (SDKs): Build and deploy apps that integrate with Oracle Cloud Infrastructure services. Each SDK provides the tools you need to develop an app, including code samples and documentation to create, test, and troubleshoot. In addition, if you want to contribute to the development of the SDKs, they are all open source and available on GitHub.

  • SDK for Java
  • SDK for Python
  • SDK for TypeScript and JavaScript
  • SDK for .NET
  • SDK for Go
  • SDK for Ruby

Command Line Interface (CLI): The CLI provides the same core capabilities as the Oracle Cloud Infrastructure Console, plus additional commands that can extend the Console's functionality. The CLI is convenient for developers or anyone who prefers the command line to a GUI.
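
As a small illustration of the SDK route, here is a minimal sketch using the Python SDK (it assumes a valid profile already exists in ~/.oci/config):

# Minimal OCI Python SDK example: list the regions available to the tenancy
import oci

config = oci.config.from_file()                 # reads the DEFAULT profile from ~/.oci/config
identity = oci.identity.IdentityClient(config)

for region in identity.list_regions().data:
    print(region.name)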

Categories: DBA Blogs

Oracle 11g on AWS RDS Will Be Force Upgraded in Coming Months

Thu, 2020-08-06 00:51
To make a long story short: if you have Oracle 11g running on AWS RDS, then start thinking about, planning, and implementing its upgrade to a later version, preferably Oracle 19c.

This is what AWS has to say about this:

Oracle has announced the end date of support for Oracle Database version 11.2.0.4 as December 31, 2020, after which Oracle Support will no longer release Critical Patch Updates for this database version. Amazon RDS for Oracle will end support for Oracle Database version 11.2.0.4 Standard Edition 1 (SE1) for License Included (LI) model on October 31, 2020. For the Bring Your Own License (BYOL) model, Amazon RDS for Oracle will end the support for Oracle Database version 11.2.0.4 for all editions on December 31, 2020. All 11.2.0.4 SE1 LI instances will be automatically upgraded to 19c starting on November 1, 2020. Likewise, the 11.2.0.4 BYOL instances will be automatically upgraded to 19c starting on January 1, 2021. We highly recommend you upgrade your existing Amazon RDS for Oracle 11.2.0.4 DB instances and validate your applications before the automatic upgrades begin. 

The bit which probably applies to most enterprise customers who are running Oracle 11g with a BYOL license is this:

January 1, 2021: Amazon RDS for Oracle starts automatic upgrades of DB instances restored from snapshots to 19c.
Instead of leaving it to the last minute, it's better to upgrade sooner. There are lots of things which need to be taken into consideration for this upgrade, within and outside of the database. If you need a hand with that, feel free to reach out.
Categories: DBA Blogs

Oracle Cloud's Beefed Up Security

Wed, 2020-08-05 01:23
During the first few months of the COVID-19 pandemic, many organizations expected a slowdown in their digital transformation efforts. But surprisingly, things haven't slowed down in many places; instead, many enterprises accelerated their use of cloud-based services to help them manage and address emerging priorities in the new normal, which includes a distributed workforce and new digital strategies.

More and more companies, especially those in regulated industries, want to adopt the latest cloud technologies, but they often face barriers due to strict data privacy or compliance requirements. As cloud adoption grows, we’re seeing exponential growth in cloud resources. With this we’re also seeing growth in permissions, granted to humans and workloads, to access and change those resources. This introduces potential risks, including the misuse of privileges, that can compromise your organization’s security.

To mitigate these risks, ideally every human or workload should only be granted the permissions they need, at the time they need them. This is the security best practice known as "least privilege access." Oracle Cloud Infrastructure Identity and Access Management (IAM) lets you control who has access to your cloud resources. You can control what type of access a group of users has and to which specific resources.

Compartments are a fundamental component of Oracle Cloud Infrastructure for organizing and isolating your cloud resources. You use them to clearly separate resources for the purposes of measuring usage and billing, access (through the use of policies), and isolation (separating the resources for one project or business unit from another). A common approach is to create a compartment for each major part of your organization. 

The first step in establishing least privilege is understanding which permissions a user has today and which have been used recently. Then, you need to understand which permissions this user is likely to need in the future, so you avoid getting into a manually intensive trial-and-error loop of assigning incremental permissions. Once you have that, you need to decide how to construct your identity and access management (IAM) policies so that you can reuse roles across several compartments.
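
To make the compartment-and-policy idea concrete, OCI policy statements follow a simple "Allow group ... to <verb> <resource-type> in compartment ..." pattern. A small hypothetical example (the group and compartment names are made up):

Allow group ProjectA-Developers to use instances in compartment ProjectA
Allow group ProjectA-Developers to read buckets in compartment ProjectA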

In the Console, you view your cloud resources by compartment. This means that after you sign in to the Console, you'll choose which compartment to work in (there's a list of the compartments you have access to on the left side of the page). Notice that compartments can be nested inside other compartments. The page will update to show that compartment's resources that are within the current region. If there are none, or if you don't have access to the resource in that compartment, you'll see a message.

This experience is different when you're viewing the lists of users, groups, dynamic groups, and federation providers. Those reside in the tenancy itself (the root compartment), not in an individual compartment.

As for policies, they can reside in either the tenancy or a compartment, depending on where the policy is attached. Where it's attached controls who has access to modify or delete it. 
Categories: DBA Blogs

Oracle Cloud for Existing Oracle Workloads

Mon, 2020-07-27 19:57
As the technology requirements of your business or practice grow and change over time, deploying business-critical applications can increase complexity and overhead substantially. This is where Oracle Cloud can assist the organization in an optimal and cost-effective way.


To help manage this ever-growing complexity, organizations need to select a cloud solution which is similar to their existing on-prem environments. Almost all serious enterprise outfits are running some sort of Oracle workload, and it only makes sense for them to select Oracle Cloud in order to leverage what they already know in a better and more modern way. They can also utilize Oracle's architecture best practices to help build and deliver great solutions.

Cost management, operational excellence, performance efficiency, reliability, and security are hallmarks of Oracle Cloud, plus some more. Oracle databases are already getting complex and autonomous. They are now harder to manage, and that is why it only makes sense to migrate them over to Oracle Cloud and let Oracle handle all the nitty gritty.

Designing and deploying a successful workload in any environment can be challenging. This is especially true as agile development and DevOps/SRE practices begin to shift responsibility for security, operations, and cost management from centralized teams to the workload owner. This transition empowers workload owners to innovate at a much higher velocity than they could achieve in a traditional data center, but it creates a broader surface area of topics that they need to understand to produce a secure, reliable, performant, and cost-effective solution.

Every company is on a unique cloud journey, but the core of Oracle is the same.



Categories: DBA Blogs

ADB-ExaC@C? What in the Heck is Oracle Autonomous Database?

Sat, 2020-07-25 23:51
ADB-ExaC@C? I would love to see the expression on the face of Corey Quinn when he learns about this naming convention used by Oracle for their Exadata-in-the-cloud offering.

Since Oracle 10g, we have been hearing about the self-managed, self-healing, and self-everything Oracle database. Oracle 10g was touted as the self-healing one, and if you have managed Oracle 7, 8i, or 9i, this was in fact true given how much pain 10g had taken away.

But 10g was far from self-managed, or autonomous in other words. Autonomous means that you wouldn't have to manage anything and the database would run by itself. Once you switch it on (or it could even do that by itself), it would be on its own. This wasn't the case with 10g, 11g, 12c, 18c, etc. Database administrators were still in vogue.

With everything moving over to the cloud, is that still the case? Or in other words, with this autonomous bandwagon of Oracle plus their cloud offerings, is the autonomous database a reality now?

So what in the heck is the Oracle Autonomous Database? Autonomous Database delivers a machine-learning-driven, self-managed database capability that natively builds in Oracle's extensive technology stack and best practices for self-driving, self-securing, and self-repairing operation.

Oracle says that their Autonomous Database is completely self-managed, allowing you to focus on business innovations instead of technology and is consumed in a true pay-per-use subscription model to lower operational cost. Yes, we have heard almost similar claims with previous versions, but one main difference here is that this one is in the cloud.

Well, if you have opted for Exadata in Oracle's cloud, then it's true to a great extent. Oracle Autonomous Database on Exadata Cloud@Customer (ADB-ExaC@C) is here, and as Oracle would be managing it, you wouldn't have to worry about its management. But if it's autonomous, why would anyone, including Oracle, need to manage it? Shouldn't it be managing itself?

So this autonomous ADB-ExaC@C provides you with something called architectural identicality, which can be easily achieved by anything non-autonomous. They say it's elastic as it can auto-scale up and down; I think AWS Aurora and GCP BigQuery have been doing that for some time now. Security patching, upgrades, and backups are all behind the scenes and automated for ADB-ExaC@C. I am still at a loss as to what really makes it autonomous here.

Don't get me wrong. I am a huge fan of Exadata despite its blood-curdling price. Putting Exadata in the cloud and offering it as a service is a great idea too, as this will enable many businesses to use it. My question is simple: ADB-ExaC@C is a managed service for sure, but what makes it autonomous?
Categories: DBA Blogs

What's Different About Oracle's Cloud

Sat, 2020-07-25 23:28
Cloud infrastructure is the foundation to powering your SaaS applications. The cloud infrastructure supporting a SaaS application is the engine that provides the security, scale, and performance for your business applications. It includes the database, operating systems, servers, routers, and firewalls (and more) required to process billions of application transactions every day.


In the words of Larry Ellison, "The main economic benefit of Oracle’s Gen 2 Cloud Infrastructure is its autonomous capability, which eliminates human labor for administrative tasks and thus reduces human error. That capability is particularly important in helping prevent data theft against increasingly sophisticated, automated hacks."

But with an outdated, overly complex ERP system, an organization can find it a challenge to efficiently provide financial information. For one thing, heavily manual processes result in a lack of confidence in data, making it hard to drive productivity and service improvements. By insisting on zero customization of their Oracle Cloud applications, organizations across the world ensure that regular updates are simple and that their processes are integrated and scalable. As a result, one such utility has shortened its order lead times significantly, reduced customer complaints, and boosted overall customer satisfaction levels.

Oracle’s second-generation cloud offers autonomous operations that eliminate human error and provide maximum security, all while delivering truly elastic and serverless services with the highest performance—available globally both in the public cloud and your data centers.
Categories: DBA Blogs
