Pakistan's First Oracle Blog

Blog By Fahd Mirza Chughtai

Understanding Nested Lists Dictionaries of JSON in Python and AWS CLI

Sat, 2019-04-20 03:01

After lots of hair pulling and bouts of frustration, I was able to grasp this nested list and dictionary thingie in the JSON output of AWS CLI commands such as describe-db-instances and others. If you run describe-db-instances for RDS or describe-instances for EC2, you get a huge pile of JSON mumbo-jumbo with all those curly and square brackets studded with colons and commas. The output is heavily nested.


For example, if you do:

aws rds describe-db-instances

you get all the information, but heavily nested within. Now if you only want to extract or iterate through, say, the VpcSecurityGroupId of all database instances, then you have to traverse all that nested information, which comprises a dictionary whose keys have arrays as values, and those arrays contain more dictionaries, and so on.

After the above rant, let me try to ease the pain a bit by explaining this. For clarity, I have taken the following chunk out of the describe-db-instances output. Suppose the only thing you are interested in is the value of VpcSecurityGroupId from the following chunk:

mydb=rds.describe_db_instances(DBInstanceIdentifier=onedb)
mydb= {'DBInstances':
          [
            {'VpcSecurityGroups': [ {'VpcSecurityGroupId': 'sg-0ed48bab1d54e9554', 'Status': 'active'}]}
          ]
       }

The variable mydb is a dictionary with the key DBInstances. This key DBInstances has an array as its value. The first item of that array is another dictionary, and the first key of that dictionary is VpcSecurityGroups. The value of this key VpcSecurityGroups is another array. That array's first item is again a dictionary. This last dictionary has the key VpcSecurityGroupId, and we want the value of that key.

If your head has stopped spinning, then read on and stop cursing me as I am going to demystify it now.

If you want to print that value, just use the following expression:

mydb['DBInstances'][0]['VpcSecurityGroups'][0]['VpcSecurityGroupId']

So the secret is: if it's a dictionary, use the key name; if it's an array, use the index; and keep going. That's all there is to it. The full code to print this using Python and boto3 is as follows:

import boto3
import click

rds = boto3.client('rds',region_name='ap-southeast-2')
dbs = rds.describe_db_instances()

@click.group()
def cli():
    "Gets RDS data"
    pass

@cli.command('list-database')
@click.argument('onedb')
def list_database(onedb):
    "List info about one database"
    mydb=rds.describe_db_instances(DBInstanceIdentifier=onedb)
    #print(mydb)
    #Following line only prints value of VpcSecurityGroupId of RDS instance
    print(mydb['DBInstances'][0]['VpcSecurityGroups'][0]['VpcSecurityGroupId'])
    #Following line only prints value of OptionGroup of RDS instance
    print(mydb['DBInstances'][0]['OptionGroupMemberships'][0]['OptionGroupName'])
    #Following line only prints value of Parameter Group of RDS instance
    print(mydb['DBInstances'][0]['DBParameterGroups'][0]['DBParameterGroupName'])

if __name__ == '__main__':
    cli()
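
And if you want to walk every DB instance instead of just one, a minimal sketch along the same lines (assuming all the instances fit in a single describe_db_instances page) could be:

import boto3

rds = boto3.client('rds', region_name='ap-southeast-2')

# DBInstances is an array; each item is a dictionary describing one instance
for db in rds.describe_db_instances()['DBInstances']:
    # VpcSecurityGroups is again an array of dictionaries
    for sg in db['VpcSecurityGroups']:
        print(db['DBInstanceIdentifier'], sg['VpcSecurityGroupId'])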


I hope that helps. If you know an easier way, please do us a favor and let us know in the comments. Thanks.

Categories: DBA Blogs

AWS CloudFormation Linter Installation and Quick Introduction

Thu, 2019-04-04 23:27
AWS CloudFormation is an evolving managed service which facilitates infrastructure as code in the cloud. What it means is that you can create AWS resources like EC2 instances, S3 buckets and many more just by writing code instead of using the GUI console.



For instance, if you want to create 100 EC2 instances for production, and later you have to create the same for development and for testing, and then maybe later on you need to change the configuration of those EC2 instances, doing that through the GUI would be a very tedious task. With CloudFormation, you just describe those EC2 instances once in a file called a template and then run it.

Now another cool thing here is that you don't have to write code for all those 100 EC2 instances in that template. You may just describe one EC2 instance and then use CloudFormation Macros to provision 100 or even more of them  (AWS resource limits apply).

Anyway, CloudFormation templates are written either in JSON or in YAML, and they grow big fairly quickly, so it becomes hard to keep track of syntactical and other errors. With aws cloudformation validate-template or with the GUI you can identify a mal-formatted template file, but what if you have written a property of a resource incorrectly? What if the resource specification is not as per the AWS resource specification document?

Enter AWS CloudFormation Linter.

Linting is an ancient concept from the good old C era, referring to a tool that checks source code for syntax errors and bugs. So CloudFormation Linter, or cfn-lint, is a nifty tool to check for syntactical errors and bugs in the template. I tried to catch logical errors in a template with cfn-lint, but it was unable to identify them. For example, in my template for building a code pipeline, I omitted the cfn-init signal that lets CFN know the launch configuration is complete so that the deployment group of CodeDeploy could use it, and cfn-lint was unable to catch that. So that would be a great future feature for cfn-lint.

Following is an example of how to install it; I then passed one of my templates through it to identify any errors:
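
A minimal sketch of the installation and a sample check, assuming pip is available and the template is named ec2.yaml:

pip install cfn-lint        # installs the cfn-lint command line tool
cfn-lint --version          # confirm the install
cfn-lint ec2.yaml           # lints the template against the AWS resource specification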



Another cool blog post by Chuck Meyer is here about cfn-lint and git pre-commit validation.

Categories: DBA Blogs

Step by Step Troubleshooting and Validation of AWS CloudFormation Yaml template

Mon, 2019-04-01 02:00
CloudFormation (CFN) is the infrastructure-as-code service of AWS. You just tell CFN your desired state of resources and it creates them in order while resolving dependencies. You describe these resources in a file called a template, which can be written in YAML or JSON. YAML, being a bit more readable, is widely used now. YAML is great, but you have to be careful with its indentation.


Following is a step-by-step troubleshooting and validation of a simple CFN template using the aws cli from a Windows 10 command prompt.

Step 1: Create your template file. I am using ec2.yaml, which just creates an EC2 instance and a security group with some parameters and outputs.


#####################################################################################################
#  Template : Use to deploy simple T2.MICRO EC2 Instance with a security group
#
#  Author   : Fahd Mirza
#
#  Created on :  23rd March, 2019
#
#####################################################################################################
---
AWSTemplateFormatVersion: "2010-09-09"
Description: Deploy EC2 Instance with a security group
######################################################################################################
Parameters:
######################################################################################################
  InstanceType:
    Type: String
    Default: t2.micro
    AllowedValues:
      - t2.micro
      - t2.2xlarge
  MyRSAKey:
    Type: String
    Description: Supply your RSA Key Name
Mappings:
  RegionMap:
    us-east-1:
      AMI: ami-1853ac65
    us-west-1:
      AMI: ami-bf5540df
    eu-west-1:
      AMI: ami-3bfab942
    ap-southeast-1:
      AMI: ami-e2adf99e
    ap-southeast-2:
      AMI: ami-43874721   
######################################################################################################
Resources:
######################################################################################################
  MySecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow SSH to EC2 Bastion Host
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: '22'
        ToPort: '22'
        CidrIp: 0.0.0.0/0
      Tags:
      - Key: Name
        Value: MyBastionSG112
  MyEc2Instance:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          files:
            /etc/cfn/cfn-hup.conf:
              content: !Sub |
                    [main]
                    stack=${AWS::StackId}
                    region=${AWS::Region}
                    interval=1
              mode: '000744'
              owner: root
              group: root
            /etc/cfn/hooks.d/cfn-auto-reloader.conf:
              content: !Sub |
                     [cfn-auto-reloader-hook]
                     triggers=post.update
                     path=Resources.MyEc2Instance.Metadata.AWS::CloudFormation::Init
                     action=/opt/aws/bin/cfn-init -v --stack ${AWS::StackId} --region ${AWS::Region} --resource MyEc2Instance
                     runas=root
            ~/hello1.txt:
               content: !Sub |
                     hello world, I am from files section of CFN Init metadata.         
          commands:
            RunA:
              command: echo "hello world, I am from command section of CFN Init metadata." > ~/hello.txt
              ignoreErrors: "true"
          services:
              sysvinit:
                  cfn-hup:
                    enabled: true
                    ensureRunning: true
                    files: [/etc/cfn/cfn-hup.conf, /etc/cfn/hooks.d/cfn-auto-reloader.conf]
    Properties:
      Tags:
      - Key: Name
        Value: !Ref "AWS::StackName"
      ImageId:
        Fn::FindInMap:
        - RegionMap
        - !Ref AWS::Region
        - AMI       
      InstanceType:
        Ref: InstanceType
      KeyName:
        Ref: MyRSAKey
      UserData:
        Fn::Base64: !Sub |
              #!/bin/bash -xe
              yum install -y aws-cfn-bootstrap
              chkconfig --add cfn-hup
              /opt/aws/bin/cfn-init -v --stack ${AWS::StackId} --resource MyEc2Instance --region ${AWS::Region}
              /opt/aws/bin/cfn-signal -e 0 --stack ${AWS::StackName} --resource MyEc2Instance --region ${AWS::Region}
    CreationPolicy:
      ResourceSignal:
        Timeout: "PT15M"
######################################################################################################
Outputs:
######################################################################################################
  MyEC2InstancePublicIP:
    Description: My EC2 Instance Public IP
    Value: !GetAtt MyEc2Instance.PublicIp
  MyEC2InstanceID:
    Description: My EC2 Instance ID
    Value: !Ref MyEc2Instance
  MyEC2SecGroup:
    Description: My EC2 Security Group
    Value: !Ref MySecurityGroup

Step 2: When I first ran validation on this template, it gave me an error about indentation like the following. I fixed the indentation using a simple editor. You can also use any online YAML editor or Visual Studio Code.

C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>dir
 Volume in drive C is Windows
 Volume Serial Number is 5285-4635

 Directory of C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates

23/03/2019  12:54 PM              .
23/03/2019  12:54 PM              ..
23/03/2019  12:49 PM             4,360 ec2.yaml
23/03/2019  12:54 PM             2,461 my.yaml
               2 File(s)          6,821 bytes
               2 Dir(s)  807,032,090,624 bytes free

C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>aws cloudformation validate-template --template-body file://my.yaml

An error occurred (ValidationError) when calling the ValidateTemplate operation: [/Parameters/Mappings] 'null' values are not allowed in templates

C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>
C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>
C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>aws cloudformation validate-template --template-body file://ec2.yaml

An error occurred (ValidationError) when calling the ValidateTemplate operation: Invalid template parameter property 'RegionMap'

C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>aws cloudformation validate-template --template-body file://ec2.yaml

Step 3: Next, a parameter (MyVPCId) was referenced in the Resources section but wasn't declared in the Parameters section, so I removed that reference.

An error occurred (ValidationError) when calling the ValidateTemplate operation: Template format error: Unresolved resource dependencies [MyVPCId] in the Resources block of the template

Step 4: Now it works like a charm.

C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>
C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>
C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>
C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>aws cloudformation validate-template --template-body file://ec2.yaml
Deploy EC2 Instance with a security group
PARAMETERS              Supply your RSA Key Name        False   MyRSAKey
PARAMETERS      t2.micro                False   InstanceType

C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>
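
Once the template validates cleanly, it can be deployed. A sketch of the create-stack call, with the stack name and key pair name as placeholders:

aws cloudformation create-stack --stack-name my-ec2-stack --template-body file://ec2.yaml --parameters ParameterKey=MyRSAKey,ParameterValue=my-key-pair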

Categories: DBA Blogs

When AWS SCT Unable to Connect to MySQL in RDS or EC2

Fri, 2019-03-22 05:27
AWS Schema Conversion Tool (SCT) is one of the must-have tools for a successful migration of databases to AWS RDS.

If you are trying to connect to MySQL hosted on an EC2 instance or RDS and are unable to make the connection despite setting the security groups correctly and making sure that the port, hostname, username and password are right, then first check the log of SCT.




2019-03-22 19:40:16.866 [   1]     GENERAL INFO    global_settings:
    app_folder=AWS Schema Conversion Tool
    bucket-name=sct-cloud9-test
    console_log_folder=C:\Users\mirza\AWS Schema Conversion Tool\Log
    convert_without_statistic_reminder=false
    data_migration_settings=
    db2luw_driver_file=
    debug_mode=false
    ext_pack_version_alert=true
    facade_log_folder=C:\Users\mirza\AWS Schema Conversion Tool\Log
    file-system-thread-count=1
    forget-state=false
    greenplum_driver_file=
    install_dir=C:\Users\mirza
    log.max_file_size_mb=100
    log.max_files_count=20
    log_folder=C:\Users\mirza\AWS Schema Conversion Tool\Log
    log_folder_extractors=C:\Users\mirza\AWS Schema Conversion Tool\Extractor Log
    log_folder_extractors_ask_before_load=false
    log_folder_extractors_cassandra=C:\Users\mirza\AWS Schema Conversion Tool\Extractor Log\Cassandra
    maria_driver_file=
    message_level.CLONE_DATACENTER=INFO
    message_level.COMPARE_SCHEMA=INFO
    message_level.CONTROLLER=INFO
    message_level.GENERAL=INFO
    message_level.LOADER=INFO
    message_level.PARSER=INFO
    message_level.PRINTER=INFO
    message_level.RESOLVER=INFO
    message_level.TELEMETRY=INFO
    message_level.TRANSFORMER=INFO
    message_level.TYPEMAPPING=INFO
Caused by: com.amazon.sct.dbloader.DbLoaderHandledException: Connection wasn't established. Check connection properties.
at com.amazon.sct.dbloader.DbLoader.checkConnection(DbLoader.java:512)
at com.amazon.sct.dbloader.DbLoader.connect(DbLoader.java:349)
at com.amazon.sct.dbloader.DbLoaderContainer.checkAccessibility(DbLoaderContainer.java:25)
at com.amazon.sct.task.TestConnectionTask.call(TestConnectionTask.java:26)
at com.amazon.sct.task.TestConnectionTask.call(TestConnectionTask.java:12)
at javafx.concurrent.Task$TaskCallable.call(Task.java:1423)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 1 more

In this case, there are no errors about a timeout, which would commonly point to security group configuration, nor about a credential issue. Here the issue is not with SCT or any AWS component; rather it's about the user you are using to connect to MySQL.

Create the MySQL user as follows and try again; it should work:

CREATE USER 'myuser'@'localhost' IDENTIFIED BY 'mypass';
CREATE USER 'myuser'@'%' IDENTIFIED BY 'mypass';

GRANT ALL ON *.* TO 'myuser'@'localhost';
GRANT ALL ON *.* TO 'myuser'@'%';

Then first test the connection using MySQL Workbench and then try with SCT.

Hope it helps.

Categories: DBA Blogs

Monitoring Database in AWS Aurora After Migrating from Oracle to PostgreSQL

Fri, 2019-03-15 19:08
Suppose you have an Oracle database on-premises, which you have now moved over to AWS Cloud in AWS Aurora PostgreSQL.
For your Oracle database, you have been using v$ views to monitor runtime performance of the instance, long-running operations, top SQLs from ASH, blocking, and so on. How do you continue doing that when you migrate your database to the cloud, especially to AWS Aurora-based PostgreSQL?

Well, PostgreSQL provides statistics collection views, a subsystem that collects runtime dynamic information about certain server activities, such as statistical performance information. For example, you can use the pg_stat_activity view to check for long-running queries.
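
A minimal sketch of such a check (the 5-minute threshold is an arbitrary choice):

SELECT pid, now() - query_start AS runtime, state, query
FROM pg_stat_activity
WHERE state <> 'idle'
  AND now() - query_start > interval '5 minutes'
ORDER BY runtime DESC;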

There are various other statistics views in PostgreSQL, such as pg_stat_all_tables to see how a table is being accessed (for example, full table scans versus index scans), and so on. There are other views to check I/O on tables and indexes, and a plethora of others.

In addition to these statistics views, Aurora PostgreSQL provides a nifty tool called Performance Insights. Performance Insights monitors Amazon RDS and Aurora databases (both MySQL and PostgreSQL) and captures workloads so that you can analyze and troubleshoot database performance. Performance Insights visualizes the database load and provides very useful filtering on attributes such as waits, SQL statements, hosts, or users.

As part of operational excellence, it's imperative after a database migration that performance is monitored, documented and continuously improved. Performance Insights and the statistics views are great for proactive and reactive database tuning in AWS RDS and AWS Aurora.
Categories: DBA Blogs

AWS Support Knowledge Center

Sun, 2019-02-24 19:29
In addition to the AWS documentation and blogs, one of the best resources is the AWS Knowledge Center. It contains frequently asked questions from AWS customers, so this resource covers real-world problems with their solutions.


The AWS Knowledge Center contains FAQs for almost all of the AWS services. For example, I was researching whether there was a way to restore or recover a terminated EC2 instance in case I didn't have any backups or AMIs and all the EBS volumes were deleted, and the answer was no.

The great thing was that AWS did mention the best practice for that case: taking backups or making AMIs of instances, or better yet, enabling termination protection.

The AWS Knowledge Center is quite a rich resource and, like everything else with AWS, it's continuously evolving.
Categories: DBA Blogs

Updating Oracle Opatch with AWS Systems Manager Run Command at Scale

Sun, 2019-02-10 02:16
AWS Systems Manager (SSM) is a managed service by AWS which is used to configure and manage EC2 instances, other AWS resources, and on-premises VMs/servers at scale. SSM frees you from needing ssh or bastion host access to the remote resources.


Pre-requisites of SSM:

The managed instances need to have the SSM agent running.
The managed instances need to be assigned an IAM role with the policy AmazonEC2RoleforSSM.
The managed instances need to have a meaningful tag assigned to them to make it possible to manage them in bulk.

Example:

This example assumes that the above pre-requisites are already in place. For step-by-step instructions on how to do that, check this resource (https://aws.amazon.com/getting-started/tutorials/remotely-run-commands-ec2-instance-systems-manager/). It also assumes that all the instances have been assigned tags like Env=Prod or Env=Dev.

Following is the script update_opatch.sh, which was already bootstrapped to the EC2 instance at creation time via userdata, so it's already present at /u01/scripts/update_opatch.sh:

#!/usr/bin/env bash
set -x
ORACLE_SID=`ps -ef | grep pmon | grep -v asm | awk '{print $NF}' | sed s'/pmon_//' | egrep "^[+]"`
export ORAENV_ASK=NO
. oraenv > /dev/null 2>&1
# Keep the old OPatch aside with a date stamp, then unpack the new one
mv $ORACLE_HOME/OPatch $ORACLE_HOME/OPatch.$(date +%Y%m%d%H%M)
curl -T /tmp/ -u test@test.com ftps://
mv /tmp/p6880880_101000_linux64.zip $ORACLE_HOME
cd $ORACLE_HOME
unzip p6880880_101000_linux64.zip


Now just running the following command in Systems Manager will update OPatch on all the managed instances that carry the tag Env=Prod.

aws ssm send-command --document-name "AWS-RunShellScript" --comment "update_opatch.sh" --parameters commands=/u01/scripts/update_opatch.sh --targets "Key=tag:Env,Values=Prod"
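
To see how the run went on each instance, the invocations can be checked afterwards; a sketch, where the command ID comes from the send-command output:

aws ssm list-command-invocations --command-id "<command-id>" --details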


Categories: DBA Blogs

awk -F option

Thu, 2019-01-24 21:25
I often need to extract a subset of data from one command so that it can be used in another during DBA tasks performed through shell scripts.

One such requirement is when you need to feed the names of ASM disks to the querydisk command.


For example, if you want to see the mapping of ASM disks to the devices at OS level, you need to run the following command:

oracleasm querydisk -d <diskname>

Now, in order to get the ASM disk names, you need to run:

oracleasm listdisks

Now a sample output from oracleasm querydisk would be:

Disk "MYDISK" is a valid ASM disk on device [3,7]

In the above output, 3 is the major and 7 is the minor number of the disk device, which are often visible in OS-level commands like iostat. How would you extract that 3 and 7 from the above output?

Well, one way is to use the awk -F option, like the following:

In order to extract 3, you would do:

echo 'Disk "MYDISK" is a valid ASM disk on device [3,7]' | awk -F'[' '{print $2}' | awk -F, '{print $1}'

In order to extract 7, you would do:

echo 'Disk "MYDISK" is a valid ASM disk on device [3,7]' | awk -F'[' '{print $2}' | awk -F, '{print $2}' | tr -d ']'
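
Putting it together, a small sketch that loops over all ASM disks and pulls out both numbers (assuming it is run as root with oracleasm in the path):

for disk in $(oracleasm listdisks); do
  out=$(oracleasm querydisk -d "$disk")
  major=$(echo "$out" | awk -F'[' '{print $2}' | awk -F, '{print $1}')
  minor=$(echo "$out" | awk -F'[' '{print $2}' | awk -F, '{print $2}' | tr -d ']')
  echo "$disk -> major $major, minor $minor"
done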

Hope that helps
Categories: DBA Blogs

Step by Step Installation of PostgreSQL 11 on Oracle Linux 7

Fri, 2019-01-11 02:30
This is a quick spit-out of the commands I used on my test virtual machine to install PostgreSQL 11 on Oracle Enterprise Linux 7.



[root@docker ~]# curl -O https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-7-x86_64/pgdg-oraclelinux11-11-2.noarch.rpm
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4952  100  4952    0     0   1889      0  0:00:02  0:00:02 --:--:--  1890
[root@docker ~]# rpm -ivh pgdg-oraclelinux11-11-2.noarch.rpm
warning: pgdg-oraclelinux11-11-2.noarch.rpm: Header V4 DSA/SHA1 Signature, key ID 442df0f8: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:pgdg-oraclelinux11-11-2          ################################# [100%]
[root@docker ~]# yum list postgres*
Loaded plugins: langpacks, ulninfo
pgdg11                                                   | 4.1 kB     00:00   
(1/2): pgdg11/7Server/x86_64/primary_db                    | 141 kB   00:02   
(2/2): pgdg11/7Server/x86_64/group_gz                      |  245 B   00:02   
Available Packages
postgresql.i686                     9.2.24-1.el7_5           ol7_latest       
postgresql.x86_64                   9.2.24-1.el7_5           ol7_latest       
postgresql-contrib.x86_64           9.2.24-1.el7_5           ol7_latest       
postgresql-devel.i686               9.2.24-1.el7_5           ol7_latest       
postgresql-devel.x86_64             9.2.24-1.el7_5           ol7_latest       
postgresql-docs.x86_64              9.2.24-1.el7_5           ol7_latest       
postgresql-jdbc.noarch              42.2.5-1.rhel7.1         pgdg11           
postgresql-jdbc-javadoc.noarch      42.2.5-1.rhel7.1         pgdg11           
postgresql-libs.i686                9.2.24-1.el7_5           ol7_latest       
postgresql-libs.x86_64              9.2.24-1.el7_5           ol7_latest       
postgresql-odbc.x86_64              09.03.0100-2.el7         ol7_latest       
postgresql-plperl.x86_64            9.2.24-1.el7_5           ol7_latest       
postgresql-plpython.x86_64          9.2.24-1.el7_5           ol7_latest       
postgresql-pltcl.x86_64             9.2.24-1.el7_5           ol7_latest       
postgresql-server.x86_64            9.2.24-1.el7_5           ol7_latest       
postgresql-static.i686              9.2.24-1.el7_5           ol7_latest       
postgresql-static.x86_64            9.2.24-1.el7_5           ol7_latest       
postgresql-test.x86_64              9.2.24-1.el7_5           ol7_latest       
postgresql-unit11.x86_64            7.0-1.rhel7              pgdg11           
postgresql-unit11-debuginfo.x86_64  7.0-1.rhel7              pgdg11           
postgresql-upgrade.x86_64           9.2.24-1.el7_5           ol7_optional_latest
postgresql11.x86_64                 11.1-1PGDG.rhel7         pgdg11           
postgresql11-contrib.x86_64         11.1-1PGDG.rhel7         pgdg11           
postgresql11-debuginfo.x86_64       11.1-1PGDG.rhel7         pgdg11           
postgresql11-devel.x86_64           11.1-1PGDG.rhel7         pgdg11           
postgresql11-docs.x86_64            11.1-1PGDG.rhel7         pgdg11           
postgresql11-libs.x86_64            11.1-1PGDG.rhel7         pgdg11           
postgresql11-llvmjit.x86_64         11.1-1PGDG.rhel7         pgdg11           
postgresql11-odbc.x86_64            11.00.0000-1PGDG.rhel7   pgdg11           
postgresql11-plperl.x86_64          11.1-1PGDG.rhel7         pgdg11           
postgresql11-plpython.x86_64        11.1-1PGDG.rhel7         pgdg11           
postgresql11-pltcl.x86_64           11.1-1PGDG.rhel7         pgdg11           
postgresql11-server.x86_64          11.1-1PGDG.rhel7         pgdg11           
postgresql11-tcl.x86_64             2.4.0-2.rhel7.1          pgdg11           
postgresql11-test.x86_64            11.1-1PGDG.rhel7         pgdg11           
postgresql_anonymizer11.noarch      0.2.1-1.rhel7            pgdg11           
[root@docker ~]#
[root@docker ~]#
[root@docker ~]# yum list postgres* | grep
anaconda-ks.cfg                     .bashrc                             .dbus/                              .pki/
.bash_history                       .cache/                             initial-setup-ks.cfg                .tcshrc
.bash_logout                        .config/                            .local/                             .viminfo
.bash_profile                       .cshrc                              pgdg-oraclelinux11-11-2.noarch.rpm  .xauth5hcDeF
[root@docker ~]# yum list postgres* | grep 11
postgresql-jdbc.noarch              42.2.5-1.rhel7.1         pgdg11           
postgresql-jdbc-javadoc.noarch      42.2.5-1.rhel7.1         pgdg11           
postgresql-unit11.x86_64            7.0-1.rhel7              pgdg11           
postgresql-unit11-debuginfo.x86_64  7.0-1.rhel7              pgdg11           
postgresql11.x86_64                 11.1-1PGDG.rhel7         pgdg11           
postgresql11-contrib.x86_64         11.1-1PGDG.rhel7         pgdg11           
postgresql11-debuginfo.x86_64       11.1-1PGDG.rhel7         pgdg11           
postgresql11-devel.x86_64           11.1-1PGDG.rhel7         pgdg11           
postgresql11-docs.x86_64            11.1-1PGDG.rhel7         pgdg11           
postgresql11-libs.x86_64            11.1-1PGDG.rhel7         pgdg11           
postgresql11-llvmjit.x86_64         11.1-1PGDG.rhel7         pgdg11           
postgresql11-odbc.x86_64            11.00.0000-1PGDG.rhel7   pgdg11           
postgresql11-plperl.x86_64          11.1-1PGDG.rhel7         pgdg11           
postgresql11-plpython.x86_64        11.1-1PGDG.rhel7         pgdg11           
postgresql11-pltcl.x86_64           11.1-1PGDG.rhel7         pgdg11           
postgresql11-server.x86_64          11.1-1PGDG.rhel7         pgdg11           
postgresql11-tcl.x86_64             2.4.0-2.rhel7.1          pgdg11           
postgresql11-test.x86_64            11.1-1PGDG.rhel7         pgdg11           
postgresql_anonymizer11.noarch      0.2.1-1.rhel7            pgdg11           
[root@docker ~]#
[root@docker ~]#
[root@docker ~]# yum install postgresql11-server
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package postgresql11-server.x86_64 0:11.1-1PGDG.rhel7 will be installed
--> Processing Dependency: postgresql11-libs(x86-64) = 11.1-1PGDG.rhel7 for package: postgresql11-server-11.1-1PGDG.rhel7.x86_64
--> Processing Dependency: postgresql11(x86-64) = 11.1-1PGDG.rhel7 for package: postgresql11-server-11.1-1PGDG.rhel7.x86_64
--> Processing Dependency: libpq.so.5()(64bit) for package: postgresql11-server-11.1-1PGDG.rhel7.x86_64
--> Running transaction check
---> Package postgresql11.x86_64 0:11.1-1PGDG.rhel7 will be installed
---> Package postgresql11-libs.x86_64 0:11.1-1PGDG.rhel7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================================
 Package                                   Arch                         Version                                  Repository                    Size
====================================================================================================================================================
Installing:
 postgresql11-server                       x86_64                       11.1-1PGDG.rhel7                         pgdg11                       4.7 M
Installing for dependencies:
 postgresql11                              x86_64                       11.1-1PGDG.rhel7                         pgdg11                       1.6 M
 postgresql11-libs                         x86_64                       11.1-1PGDG.rhel7                         pgdg11                       359 k

Transaction Summary
====================================================================================================================================================
Install  1 Package (+2 Dependent packages)

Total download size: 6.7 M
Installed size: 29 M
Is this ok [y/d/N]: y
Downloading packages:
(1/3): postgresql11-libs-11.1-1PGDG.rhel7.x86_64.rpm                                                                         | 359 kB  00:00:04   
(2/3): postgresql11-11.1-1PGDG.rhel7.x86_64.rpm                                                                              | 1.6 MB  00:00:04   
(3/3): postgresql11-server-11.1-1PGDG.rhel7.x86_64.rpm                                                                       | 4.7 MB  00:00:02   
----------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                               974 kB/s | 6.7 MB  00:00:07   
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : postgresql11-libs-11.1-1PGDG.rhel7.x86_64                                                                                        1/3
  Installing : postgresql11-11.1-1PGDG.rhel7.x86_64                                                                                             2/3
  Installing : postgresql11-server-11.1-1PGDG.rhel7.x86_64                                                                                      3/3
  Verifying  : postgresql11-libs-11.1-1PGDG.rhel7.x86_64                                                                                        1/3
  Verifying  : postgresql11-11.1-1PGDG.rhel7.x86_64                                                                                             2/3
  Verifying  : postgresql11-server-11.1-1PGDG.rhel7.x86_64                                                                                      3/3

Installed:
  postgresql11-server.x86_64 0:11.1-1PGDG.rhel7                                                                                                   

Dependency Installed:
  postgresql11.x86_64 0:11.1-1PGDG.rhel7                                 postgresql11-libs.x86_64 0:11.1-1PGDG.rhel7                               

Complete!
[root@docker ~]#
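
The transcript above stops at package installation; the usual next steps to initialize the cluster and start the service on EL7 would be roughly the following (a sketch, not part of the transcript):

/usr/pgsql-11/bin/postgresql-11-setup initdb
systemctl enable postgresql-11
systemctl start postgresql-11
su - postgres -c psql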

Categories: DBA Blogs

Quickly Download Oracle Software From My Oracle Support

Thu, 2018-11-22 18:19
In order to quickly download Oracle software from My Oracle Support (MOS) to your server or local host, you must have curl or wget installed.

Check these by using:

[oracle@test working]$ which curl
/usr/bin/curl
[oracle@test working]$ which wget
/usr/bin/wget




Now use the following command from the operating system prompt to download orachk, OPatch, any Oracle patch, or any other software by using your MOS credentials.

Right click on the software link and copy it. For example, for orachk, this is the link:

https://updates.oracle.com/Orion/Services/download/orachk_beta.zip?aru=21110110&patch_file=orachk_beta.zip

Now use the above link in the following command:

wget --http-user=<MOS User Name> --ask-password "<File URL>" -O <File Name>

Example:

wget --http-user=test@mos.com --ask-password "https://updates.oracle.com/Orion/Services/download/orachk_beta.zip?aru=21110110" -O orachk_beta.zip
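
If you prefer curl, an equivalent sketch (curl prompts for the MOS password when only the user is given, and -L follows redirects):

curl -L -u test@mos.com -o orachk_beta.zip "https://updates.oracle.com/Orion/Services/download/orachk_beta.zip?aru=21110110"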

Hope that helps.
Categories: DBA Blogs

Graceful Stop of Goldengate Extract

Wed, 2018-11-07 18:58
It's always a good idea to stop extracts only after checking whether there is any long-running transaction in the database being captured. Failing to do so might later result in hung or unstable processes.





Use the following command to check the extract:

GGSCI (test) 3> send e* status

Sending STATUS request to EXTRACT ext ...


EXTRACT ext (PID 16649)
Current status: In recovery[1]: Processing data with empty data queue

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

testsrv:/u05/ggate> grep -i bound dirrpt/ext.rpt
Bounded Recovery Parameter:
2017-06-07 15:48:18 INFO OGG-01639 BOUNDED RECOVERY: ACTIVE: for object pool 2: p2628_Redo_Thread_2.
2017-06-07 15:48:18 INFO OGG-01640 BOUNDED RECOVERY: recovery start XID: 129.15.2953694.

-- And then check that the RBA is moving with the following commands
./ggsci
info e*
lag e*
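
Before stopping the extract, the open long-running transactions it is still holding can be listed with showtrans; a sketch, with ext as the extract name:

GGSCI> send extract ext, showtrans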

Categories: DBA Blogs

Goldengate ERROR OGG-02037 Failed to retrieve the name of a missing Oracle redo log.

Wed, 2018-11-07 18:37
One extract abended and wasn't able to start in Oracle GoldenGate version 12.2.0.1.160517 23194417_FBO. The redo logs were present, but the extract was still abended and threw the following error in the report file.

Error:

ERROR OGG-02037 Failed to retrieve the name of a missing Oracle redo log.

Solution:




The solution for this error is to unregister, register, and then start the extract, as follows:


GGSCI (test) 6> unregister extract ext database
2018-11-07 17:07:03 INFO OGG-01750 Successfully unregistered EXTRACT ext from database.

GGSCI (test) 7> register extract ext database
2018-11-07 17:07:56 INFO OGG-02003 Extract ESTATDEV successfully registered with database at SCN 1373637632014.

GGSCI (test) 8> start extract ext

Hope this helps.
Categories: DBA Blogs

Step by Step: Ansible Role To Setup Oracle ACFS On Multiple Nodes

Thu, 2018-09-06 17:46
This post contains step-by-step instructions for creating an Ansible role acfssetup to set up Oracle ASM Cluster File System (ACFS) on multiple nodes of a cluster. It assumes that Grid Infrastructure 12.1.0.2.0 is already installed on the nodes and ASM is working fine. It also assumes that Ansible is installed on a controller host with ssh equivalency set up between the root and oracle users.




Step 1: Create the directory structure for the role acfssetup:


$ cd /etc/ansible/roles
$ mkdir acfssetup
$ cd acfssetup
$ mkdir files handlers meta templates tasks vars

Step 2: Create the Tasks (/etc/ansible/roles/acfssetup/tasks/main.yml):

---

# Note: these tasks assume the ASM instance is selected via the ORACLE_SID environment variable.

- name: Install ACFS/ADVM modules on the nodes
  become_user: "{{ superuser }}"
  environment:
    ORACLE_SID: "{{ asm_instance }}"
  shell: "{{ gi_home_path }}/bin/acfsroot install"
  tags: acfs

- name: Start and enable the ACFS modules on the nodes
  become_user: "{{ superuser }}"
  environment:
    ORACLE_SID: "{{ asm_instance }}"
  shell: "{{ gi_home_path }}/bin/acfsload start && {{ gi_home_path }}/bin/acfsroot enable"
  tags: acfs

- name: As oracle user, create an ASM volume for ACFS on the first node and capture its device name
  when: inventory_hostname == 'node1'
  become_user: "{{ gi_owner }}"
  environment:
    ORACLE_SID: "{{ asm_instance }}"
  shell: "{{ gi_home_path }}/bin/asmcmd volcreate -G {{ asm_dg_name }} -s {{ acfs_vol_size }} {{ acfs_vol_name }} && {{ gi_home_path }}/bin/asmcmd volinfo -G {{ asm_dg_name }} {{ acfs_vol_name }} | grep Device | sed 's/.*://'"
  register: device_name
  tags: acfs

- name: As oracle user, create the filesystem on the volume which was just created
  become_user: "{{ gi_owner }}"
  environment:
    ORACLE_SID: "{{ asm_instance }}"
  shell: "/sbin/mkfs -t acfs {{ device_name.stdout | trim }}"
  tags: acfs

- name: As root, create an empty directory which will house the file system
  become_user: "{{ superuser }}"
  shell: "mkdir -p /{{ acfs_mount_name }}/{{ acfs_vol_name }}; chown root:{{ gi_group }} /{{ acfs_mount_name }}; chmod 770 /{{ acfs_mount_name }}; chown -R {{ gi_owner }}:{{ gi_group }} /{{ acfs_mount_name }}/{{ acfs_vol_name }}; chmod 775 /{{ acfs_mount_name }}/{{ acfs_vol_name }}"
  tags: acfs

- name: As root, setup the file system to be auto mounted by clusterware
  become_user: "{{ superuser }}"
  environment:
    ORACLE_SID: "{{ asm_instance }}"
  shell: "{{ gi_home_path }}/bin/srvctl add volume -volume {{ acfs_vol_name }} -diskgroup {{ asm_dg_name }} -device {{ device_name.stdout | trim }} && {{ gi_home_path }}/bin/srvctl add filesystem -device {{ device_name.stdout | trim }} -path /{{ acfs_mount_name }}/{{ acfs_vol_name }} -diskgroup {{ asm_dg_name }} -user {{ gi_owner }} -fstype ACFS -description 'ACFS General Purpose Mount'"
  tags: acfs

Step 3: Create the Variables (/etc/ansible/roles/acfssetup/vars/main.yml):

ver: "12.1.0.2.0"
superuser: root
asm_instance: +ASM
asm_dg_name: DATA
acfs_vol_name: ACFSVOL1
acfs_vol_size: 10G
acfs_mount_name: acfsmounts
device_name: default([])
gi_owner: oracle
gi_group: oinstall
gi_base_path: "/u01/app/oracle"
gi_home_path: "{{ gi_base_path }}/product/{{ ver | regex_replace('^(.*)\\.(.*)\\.(.*)\\.(.*)$', '\\1.\\2.\\3') }}/grid"
gi_home_name: "OraGI{{ ver | regex_replace('^(.*)\\.(.*)\\.(.*)\\.(.*)$', '\\1\\2')}}"

Step 4: Configure Ansible host file (/etc/ansible/hosts)

node1 ansible_host=node1.foo.com
node2 ansible_host=node2.foo.com

Step 5: Create the skeleton Playbook (/etc/ansible/acfs.yml):

---
- hosts: all
  become: true
  roles:
    - acfssetup

Step 6: Run the playbook

$ ansible-playbook acfs.yml
Categories: DBA Blogs

OPS$Oracle user after Wallet Creation in Oracle 12c

Fri, 2018-08-10 00:37

----- In Oracle 12.1.0.2, created the wallet by using below commands:

TEST$ orapki wallet create -wallet "/u01/app/oracle/admin/TEST/wallet" -pwd ****  -auto_login_local
Oracle PKI Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.


TEST$ mkstore -wrl "/u01/app/oracle/admin/TEST/wallet" -createCredential TEST2 sys ********
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
Create credential oracle.security.client.connect_string1

----- But when I logged into the database as the sys user, show user displayed OPS$ORACLE instead of SYS:

TEST$ sqlplus /@TEST2

SQL*Plus: Release 12.1.0.2.0 Production on Thu Aug 9 13:09:38 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Thu Aug 09 2018 03:18:20 -04:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> sho user
USER is "OPS$ORACLE"
SQL>

----- So I made the following changes and it worked fine:

Put following entry in sqlnet.ora file:

SQLNET.WALLET_OVERRIDE = TRUE
The SQLNET.WALLET_OVERRIDE entry allows this method to override any existing OS authentication configuration.

and used mkstore to create the credential:

TEST$  mkstore -wrl "/u01/app/oracle/admin/TEST/wallet" -createCredential TEST2 sys

Categories: DBA Blogs

Network Slowness Caused Database Contention That Caused Goldengate Lag

Mon, 2018-07-30 00:55
I got paged for a GoldenGate extract lagging behind. I checked the extract configuration; it was a normal extract, and it seemed stuck without giving any error in ggserr.log or anywhere else. It wasn't abended either and was in running state.


Tried stopping and restarting it, but it still remained in running state while doing nothing, and the lag kept increasing. So the issue was clearly outside of GoldenGate. Checked the database, starting from the alert log, and didn't see any errors there either.

Jumped into the database and ran some queries to see which sessions were active and what they were running. After going through various active sessions, it turned out that a few of them were doing long transactions over a dblink, and these sessions were several hours old and seemed stuck. These sessions were also inducing widespread delay on the temp tablespace and were blocking other sessions. Due to the undersized temp plus these stuck long-running transactions, database performance was also slower than usual.

Ran a select statement over that dblink and it was very slow. Used tnsping to ping that database remotely and it returned with delay. Then used network commands like ping, tracert, etc. to check network status, and it all pointed to a delay in the network.

Killed the long-running transaction as it was going nowhere, and that eased the pressure on the temp tablespace, which in turn enabled the extract to catch up on the lag.
Categories: DBA Blogs

Log Buffer #546: A Carnival Of The Vanities For DBAs

Mon, 2018-07-30 00:38
This Log Buffer Edition covers Cloud, Oracle, and PostgreSQL.
Cloud:
Google Maps platform now integrated with the GCP Console
Getting more value from your Stackdriver logs with structured data
Improving application availability with Alias IPs, now with hot standby
Performing a large-scale principal component analysis faster using Amazon SageMaker
Optimized TensorFlow 1.8 now available in the AWS Deep Learning AMIs to accelerate training on Amazon EC2 C5 and P3 instances


Oracle:
Using GoldenGate LogDump to find bad data
Partition-Wise Operations: new features in 12c and 18c
SOA Suite 12c in Docker containers: only a couple of commands, no installers, no third party scripts
Checking if the current user is logged into Application Builder
PostgreSQL:
Let’s start out with some fun! I really enjoyed Wendy Kuhn‘s article on May 5 about the history of PostgreSQL. She starts out by relaying the importance of learning the history behind new technical tools & concepts when you’re learning. I couldn’t agree more.
Speaking of history, I’ve been waiting for the right time to mention this fun article from August 2016. Now is the time, because it relates to the previous article and because I saw a few retweets last week mentioning it. Did you ever wonder why the PostgreSQL logo is an elephant? Or what his name is?? Or even better – did turtles or cheetahs ever represent PostgreSQL???? Patrycja Dybka answers these questions and more. Check it out!
Ok, moving on to the serious stuff. :) First off, we’ve got a new round of minor releases of PostgreSQL. Version 10.4, 9.6.9, etc, were released on May 10. Time to start planning those upgrade cycles!
Next up, JD posted a nice summary of PGConf US in New Jersey on May 7. I saw a lot of familiar faces in his pictures! One quick call-out: I heard good things about the speed mentoring at the career fair. I think that was a great idea. (Among many at PGConf.)
Another interesting thing JD touched on in his blog post was the growing role of larger companies in the community. He gave a few specific examples related to Google and Microsoft. Back on April 17, Pivotal published an article listing a number of specific ways they contribute to PostgreSQL development, as well.
Speaking of cloud companies, who doesn’t like a nice rowdy comparison? Over on the SeveralNines blog, we got exactly that on May 1: a quick side-by-side comparison of a few of the many cloud providers who ship PostgreSQL. There are a bunch more – feel free to leave comments on their blog post with the providers they left out!
As long as we’re doing comparisons, I saw this old website on Twitter last week, and it’s fun enough to pass along. Thomas Kellerer from Germany wrote a nice open-source tool called SQL Workbench/J. In the process of supporting many different databases, he’s learned a lot about the differences between them. And his website has a really detailed list. Check out this list of SQL features by database – PostgreSQL is looking good!
I always enjoy a good story. Singapore-based Ashnik recently published a new case study about a global insurance company who deployed a bank data exchange system on PostgreSQL: a fine example of the serious business that runs on PostgreSQL every day.
Moving into the technology space, infrastructure company Datrium has recently published a series of interesting articles about the benchmarking and heavyweight workloads they’re throwing at PostgreSQL. The most recent article on April 25 discusses PostgreSQL on bare metal and it has links to many previous articles.
In the category of query tuning, how would you like to make a small tweak to your schema and SQL, then experience a 290x speedup? That’s exactly what happened to Yulia Oletskaya! She writes about it in this article on May 7.
“What’s common between DBA and detective? They both solve murder and mystery while trying to make sense of the nonsense.” That’s the first sentence of Alexey Lesovsky’s April 17 article about troubleshooting a PostgreSQL crash.
Going a little deeper, I have a handful of recent articles about specific database features in PostgreSQL.
First, how about a demonstration of PostgreSQL’s top-notch built-in support for full-text search? What better example than analyzing the public email of PostgreSQL contributor Tom Lane to find what his waking hours are? Turns out that he’s very consistent. In fact, it turns out you can use Tom Lane’s consistent email habits to spot server timezone misconfigurations.
Citus also published a nice article back at the beginning of April about row-level security. I didn’t include it last month but it’s worth mentioning now. PostgreSQL’s capabilities here are quite nice.
My past newsletters have been following Dimitri Fontaine's series on PostgreSQL data types. We've got three new ones this time around: the JSON, Enum and Point types.
A big selling point for PostgreSQL is its extensibility. On May 8, Luca Ferrari from Italy published an article in BSD magazine which walked through the process of building a customer extension to provide a new foreign data wrapper that connects the database directly to a file system data source.
Our friends at Timescale put out an article about streaming replication on May 3. Lee Hampton gives one of the best descriptions of this critical HA concept that I’ve seen anywhere.
Finally, can a week go by without new articles about vacuum in PostgreSQL? It seems not!
On Apr 30, Jorge Torralba published an article on DZone about tuning autovacuum. He has a specific focus on bloat, which is an important reason for vacuuming. There are some nice examples here.
And back on April 3, Emily Chang from Datadog published perhaps one of the most detailed articles about vacuum that I’ve seen. Well worth reviewing.
To close up this edition: something a little different. This year marks the 15th anniversary of pgpool. And Tatsuo Ishii reminded us with a blog post on April 15.
So in honor of the 15th anniversary, let's collect a few recent links *just* about pgpool!
Tatsuo also published two other articles in April about various features of pgpool:
And Vladimir Svedov at severalnines published a two-part series on pgpool in April as well.
And that’s a wrap for this week. Likely more content than you’ll have time for, as usual! My job here is done. :)

Originally posted at https://blog.pythian.com/log-buffer-546-carnival-vanities-dbas/
Categories: DBA Blogs

How to find the UUID of a device in Linux for Oracle ASM

Sun, 2018-05-27 22:45
UUID stands for Universally Unique Identifier. I use the UUID of a disk device when I need to create and add disks for Oracle ASM, as the UUID is independent of the device name or mountpoint. So it's always a good idea to use the UUID of a device in the fstab file in Linux.

So here is how to find the UUID of a device in Linux for Oracle ASM:




[root@bastion ~]$ ls -l /dev/disk/by-uuid
lrwxrwxrwx 1 root root 11 JAN 18 20:38 1101c254-0b92-42ca-b34a-6d283bd2d31b -> ../../sda2
lrwxrwxrwx 1 root root 11 JAN 18 20:38 11dc5104-C07b-4439-bdab-00a76fcb88df -> ../../sda1
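
For example, to mount the first device above by UUID instead of by device name, an fstab entry would look roughly like this (the mount point and filesystem type here are assumptions):

UUID=1101c254-0b92-42ca-b34a-6d283bd2d31b   /u01   ext4   defaults   0 0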

HTH.


Categories: DBA Blogs

Relocate Goldengate Processes to Other Node with agctl

Fri, 2018-04-20 22:00
Oracle Grid Infrastructure Agents can be used to manage Oracle GoldenGate through Oracle GI. agctl is the utility to add, modify and relocate GoldenGate. These Oracle GI agents can also be used with other products like WebLogic, MySQL, etc.


Frits has a good article about installation and general commands regarding GI agents for a non-clustered environment.

Following is the command to relocate Goldengate processes to other node with agctl. 


[gi@hostname ~]$ agctl status goldengate [service_name]
[gi@hostname ~]$ agctl config goldengate [service_name]
[gi@hostname ~]$ agctl relocate goldengate [service_name] --node [node_name]
[gi@hostname ~]$ agctl config goldengate [service_name]
[gi@hostname ~]$ agctl status goldengate [service_name]

Hope that helps.
Categories: DBA Blogs

Oracle DBAs and GDPR

Wed, 2018-04-18 01:32
The General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679) is a regulation by which the European Parliament, the Council of the European Union and the European Commission intend to strengthen and unify data protection for all individuals within the European Union (EU).


To bring an Oracle database into line with the GDPR directive, we have to encrypt all the databases and files on disk, aka encryption at rest (when data is stored). We also have to encrypt the database network traffic.

The Transparent Data Encryption (TDE) feature allows sensitive data to be encrypted within the datafiles to prevent access to it from the operating system. 

You cannot encrypt an existing tablespace. So if you wish to encrypt existing data, you need to move it from unencrypted tablespaces to encrypted tablespaces. To do this you can use any of the following methods:

i) Oracle Data Pump utility.
ii) Commands like CREATE TABLE...AS SELECT...
iii) Move tables like ALTER TABLE...MOVE..  or rebuild indexes.
iv) Oracle Table Redefinition.

In order to encrypt network traffic between client and server, we have two options from Oracle:

i) Native Network Encryption for Database Connections
ii) Configuration of TCP/IP with SSL and TLS for Database Connections

Native Network Encryption is all about setting the sqlnet.ora file and doesn't have the overhead of the second option, where you have to configure various network files at the server and client and also obtain certificates and create a wallet. With the first option there is a possibility that encryption is not guaranteed, whereas with the second option encryption is guaranteed.
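
As a sketch, server-side native network encryption comes down to a few sqlnet.ora parameters like the following (the algorithm choices are illustrative):

SQLNET.ENCRYPTION_SERVER = REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA256)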
Categories: DBA Blogs

AWS Pricing Made Easy By Simple Monthly Calculator

Wed, 2018-04-18 01:26
With an ever-changing pricing model and services, it's hard to keep track of AWS costs.





If you want to check how much it would cost to use a certain AWS service, tailored to your requirements, then use the following Simple Monthly Calculator from AWS.

AWS Price Calculator.
Categories: DBA Blogs
