DBA Blogs

Change column to row

Tom Kyte - Fri, 2019-11-08 02:48
I have data like below in a table: create table test ( sr varchar2(1), col1 number, col2 number, col3 number ); insert into test values ('a',1,2,3); insert into test values ('b',4,5,6); insert into test values ('c',7,8,9); Want...
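
One way to turn those columns into rows is the UNPIVOT clause (11g and later); a minimal sketch against the test table above, assuming the desired output is one row per (sr, column name, value):

-- Unpivot col1..col3 into (col_name, col_value) pairs, one row per source column
select sr, col_name, col_value
from   test
unpivot (col_value for col_name in (col1 as 'COL1', col2 as 'COL2', col3 as 'COL3'))
order  by sr, col_name;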
Categories: DBA Blogs

Which is Bigger – KB or MB?

VitalSoftTech - Tue, 2019-11-05 09:49

A computer noob like myself often gets confused about storage units in terms of the memory of a computer. All these KBs, MBs and GBs boggle my mind. Does that happen to you too? The question that baffles me the most is, which is Bigger – KB or MB? In addition to answering this question, […]

The post Which is Bigger – KB or MB? appeared first on VitalSoftTech.

Categories: DBA Blogs

Bridge network missing Gateway – Docker Issue

DBASolved - Sun, 2019-11-03 11:41

Here is a little something for you. I'm working on building a demo of Oracle GoldenGate Microservices between three (3) containers. To do this, I wanted to set up a dedicated network between the containers, which meant configuring a network for them to use. Docker […]

The post Bridge network missing Gateway – Docker Issue appeared first on DBASolved.

Categories: DBA Blogs

Global non partitioned index on table partitions

Tom Kyte - Fri, 2019-11-01 18:47
Hi, I have recently got some SQL statements that are not performing well. select * from v where a=? and b not in(,,,,....) and c =? and rownum<-100 where v is a view. The original SQL statement is similar to the above. From explai...
Categories: DBA Blogs

Event Based Job is not working

Tom Kyte - Fri, 2019-11-01 18:47
Hi, I have been struggling for several days with the following issue. I am trying to implement an event-based job. At first everything worked fine, but after several payload type modifications + recreating the queue and the scheduled job several times ... schedu...
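
For reference, a minimal sketch of an event-based job; the queue name, payload condition and handler below are hypothetical, not taken from the question:

-- Hypothetical event-based job: fires when a message with event_type = 1
-- arrives on queue MY_EVENT_Q (queue, condition and handler are assumptions)
begin
  dbms_scheduler.create_job(
    job_name        => 'MY_EVENT_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin my_pkg.handle_event; end;',
    event_condition => 'tab.user_data.event_type = 1',
    queue_spec      => 'MY_EVENT_Q',
    enabled         => TRUE);
end;
/

One thing worth noting: recreating the queue or its payload type generally means the job's subscription has to be recreated as well, which may be related to the behaviour described.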
Categories: DBA Blogs

How to Record your Own Music: a Beginner’s Manual

VitalSoftTech - Thu, 2019-10-31 09:56

If you’re an aspiring musician or independent artist, you might be wondering how to record your own music at home through a DIY music production process. By doing this, you could potentially save a lot of money, time, and effort that goes into producing a song. In the 21st century, there’s an app available for […]

The post How to Record your Own Music: a Beginner’s Manual appeared first on VitalSoftTech.

Categories: DBA Blogs

Database Design Question

Tom Kyte - Thu, 2019-10-31 09:47
Hello, Ask Tom Team. I have a table A with columns (client_id, invoice_number, invoice_type) with a composite primary key (client_id, invoice_number). At the business level there are 5 invoice_types (70,71,72,73,74). The invoice_number always brings the ...
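
For context, a sketch of the table as described; the datatypes are assumptions, and a check constraint pins the five invoice types:

-- Sketch of table A as described in the question; datatypes are assumed
create table a (
  client_id      number       not null,
  invoice_number varchar2(30) not null,
  invoice_type   number(2)    not null
                 constraint a_invoice_type_chk check (invoice_type in (70,71,72,73,74)),
  constraint a_pk primary key (client_id, invoice_number)
);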
Categories: DBA Blogs

finding out the source of the data in a table

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, I am working on a database application, and I need to know how a table is being populated in the database schema. I have tried querying xxx_dependencies and xxx_source, but with no luck. I believe that this table might be populated from an...
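
When a table is loaded by ad-hoc SQL or an external tool rather than stored code, the dependency views will not show anything. One possible starting point is to look for cursors that write into the table in the shared pool (MY_TABLE below is a placeholder name):

-- Look for statements that write into the table; MY_TABLE is a placeholder
select sql_id, parsing_schema_name, module, last_active_time, sql_text
from   v$sql
where  upper(sql_text) like '%MY_TABLE%'
and    (upper(sql_text) like 'INSERT%'
        or upper(sql_text) like 'MERGE%'
        or upper(sql_text) like 'UPDATE%')
order  by last_active_time desc;

If nothing turns up there, auditing the table or a DML trigger that records SYS_CONTEXT('USERENV','MODULE') can capture the writer the next time it runs.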
Categories: DBA Blogs

How to change the plan of a query in execution??

Tom Kyte - Thu, 2019-10-31 09:47
So I often face an issue where a query generates an execution plan assuming the wrong cardinality, and since it assumes it is 1 it goes into a MERGE CARTESIAN join. Now if I gather the stats the query does not automatically pick up the new plan. Is there a ...
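
One detail that often explains this: by default DBMS_STATS uses AUTO_INVALIDATE, so existing cursors are invalidated gradually rather than immediately. Passing no_invalidate => FALSE invalidates the dependent cursors right away, so the next execution re-parses and can pick up a new plan. A minimal sketch with placeholder names:

-- Gather stats and invalidate existing cursors immediately (names are placeholders)
begin
  dbms_stats.gather_table_stats(
    ownname       => 'MY_SCHEMA',
    tabname       => 'MY_TABLE',
    no_invalidate => FALSE);   -- default is DBMS_STATS.AUTO_INVALIDATE
end;
/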
Categories: DBA Blogs

List of event codes for traces

Tom Kyte - Thu, 2019-10-31 09:47
Hello Masters, Can you give me a link on docs.oracle.com that lists all the system event codes for tracing, like 10046, 10053..., and, most importantly, their meanings? I know the 10046 and 10053 codes but I am sure there are many ...
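
Most of the numeric debug events are not documented on docs.oracle.com, but their short descriptions ship with the database. One common trick is to print the text for an event number from PL/SQL:

-- Print the short description attached to a diagnostic event number
set serveroutput on
begin
  dbms_output.put_line(sqlerrm(-10046));
  dbms_output.put_line(sqlerrm(-10053));
end;
/

Connected as SYSDBA, "oradebug doc event" gives some related documentation on newer releases, but the classic 10000-series codes are easiest to check one by one as above.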
Categories: DBA Blogs

Performance of a VIEW on multiple tables (historical plus current)

Tom Kyte - Thu, 2019-10-31 09:47
Good afternoon, I have a customer that manages millions of rows across a lot of tables. The thing is that we have a big table with data from 2013 until now which we would like to divide in two, one for the last three months and one for the rest (which we call...
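
A common pattern for this kind of split is a UNION ALL view over the current and historical tables, so existing queries keep working and the optimizer can often prune the branch it does not need. A minimal sketch with hypothetical table names:

-- Hypothetical names: BIG_TABLE_CURRENT holds the last three months, BIG_TABLE_HIST the rest
create or replace view big_table_v as
select * from big_table_current
union all
select * from big_table_hist;

Depending on release and licensing, partitioning the original table by the date column may achieve the same effect without splitting it at all.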
Categories: DBA Blogs

Restrict Application access to developers in same workspace

Tom Kyte - Thu, 2019-10-31 09:47
How can I show a user only a certain set of applications in App Builder (i.e. the user should not be able to see all applications in App Builder)? In an Oracle APEX workspace, I need to create a new user (admin role) such that the new user can only see sele...
Categories: DBA Blogs

Find if event spanning several dates happened via SQL

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, I have data like below:

event_flag  event_date
1           date1
1           date2
0           date3
1           date4
0           date5
0           date6
1           date7
1           date8
1           date9
...
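
If the goal is to check for a run of consecutive rows with event_flag = 1 (the exact requirement is cut off above, so this is only a guess at it), the usual gaps-and-islands technique applies; a sketch assuming the table is called EVENTS:

-- Group consecutive event_flag = 1 rows into islands and report each run's span
-- (EVENTS and the minimum run length of 3 are assumptions for illustration)
with flagged as (
  select event_date, event_flag,
         row_number() over (order by event_date)
           - row_number() over (partition by event_flag order by event_date) as grp
  from   events
)
select min(event_date) as run_start,
       max(event_date) as run_end,
       count(*)        as run_length
from   flagged
where  event_flag = 1
group  by grp
having count(*) >= 3;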
Categories: DBA Blogs

pragma autonomous_transaction within procedure before creating synonyms

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, I have created a stored procedure within an Oracle package which creates a list of synonyms depending on a change of dblink server name. I need to execute this procedure to create/replace synonyms pointing to another dblink server. My quest...
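
For reference, a sketch of the kind of procedure described, with placeholder names. Note that CREATE SYNONYM is DDL and therefore commits implicitly, so the pragma mainly keeps that commit out of the caller's transaction:

-- Placeholder sketch: point synonyms at tables over a given db link
create or replace procedure recreate_synonyms (p_dblink in varchar2)
as
  pragma autonomous_transaction;
begin
  for r in (select table_name
            from   user_tables)   -- assumption: one synonym per local table name
  loop
    execute immediate 'create or replace synonym ' || r.table_name ||
                      '_rmt for ' || r.table_name || '@' || p_dblink;
  end loop;
  commit;
end;
/

Also keep in mind that the procedure owner needs the CREATE SYNONYM privilege granted directly (not through a role) for the EXECUTE IMMEDIATE to work.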
Categories: DBA Blogs

Undo Tablespaces.

Tom Kyte - Thu, 2019-10-31 09:47
Hi Tom, I have been waiting to ask you this question. What is an Undo Tablespace in 9i? Is it similar to Rollback Segments? What are non-standard block sizes, and why "non-standard"? Why am I not able to create a rollback segment on a Locally Managed Automatically Si...
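
For reference: with automatic undo management (introduced in 9i) the database creates and sizes the undo segments itself inside an undo tablespace, which is why creating your own rollback segments is restricted when it is enabled. A minimal sketch of creating and switching to an undo tablespace; the file path and sizes are placeholders:

-- Placeholder path and sizes
create undo tablespace undotbs2
  datafile '/u01/oradata/ORCL/undotbs2_01.dbf' size 500m autoextend on;

alter system set undo_tablespace = 'UNDOTBS2';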
Categories: DBA Blogs

How to count no of records in table without count?

Tom Kyte - Thu, 2019-10-31 09:47
2) How to count the number of records in a table without using COUNT? Actually, this question was asked when I attended an interview at Dell. I don't know why people ask these types of questions, but I answered in my own way. --->...
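
For what it's worth, the answers interviewers usually have in mind still read every row (or an index); they just avoid the COUNT keyword. MY_TABLE is a placeholder:

-- Alternatives that return the row count without the COUNT keyword
-- (note: unlike COUNT(*), these return NULL rather than 0 for an empty table)
select max(rownum) from my_table;
select sum(1)      from my_table;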
Categories: DBA Blogs

Square root in excel – A Step-By-Step Tutorial

VitalSoftTech - Wed, 2019-10-30 10:30

Have you ever stopped to wonder what life would have been like when there wasn’t a calculator to perform the arithmetical operations for people? It surely sends a shiver down your spine to even imagine the horror of having to survive without a calculator. But in the present times, it wouldn’t be wrong to state […]

The post Square root in excel – A Step-By-Step Tutorial appeared first on VitalSoftTech.

Categories: DBA Blogs

Top 8 Post Limits on Tumblr (and Other Limitations) Revealed

VitalSoftTech - Tue, 2019-10-29 09:57

Do you know about post limits on Tumblr? Are you aware that the blogging platform has certain rules and regulations which social media users must abide by? Tumblr has become one of the best and most entertaining platforms out there for blogging and sharing multimedia content. It is not just immensely popular amongst the younger […]

The post Top 8 Post Limits on Tumblr (and Other Limitations) Revealed appeared first on VitalSoftTech.

Categories: DBA Blogs

So Far So Good with Force Logging

Bobby Durrett's DBA Blog - Mon, 2019-10-28 18:55

I mentioned in my previous two posts that I had tried to figure out if it would be safe to turn on force logging on a production database that does a bunch of batch processing on the weekend: post1, post2. We know that many of the tables are set to NOLOGGING and some of the inserts have the append hint. We put in force logging on Friday and the heavy weekend processing ran fine last weekend.
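
For anyone following along, the switch itself is one statement and is easy to verify; this is generic syntax, not our exact change script:

-- Enable force logging at the database level and confirm the setting
alter database force logging;
select force_logging from v$database;

-- List tables currently marked NOLOGGING (the candidates the change could affect)
select owner, table_name
from   dba_tables
where  logging = 'NO';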

I used an AWR report to check the top INSERT statements from the weekend and I only found one that was significantly slower. But the table it inserts into is set for LOGGING, it does not have an append hint, and the parallel degree is set to 1. So, it is a normal insert that was slower last weekend for some other reason. Here is the output of my sqlstatsumday.sql script for the slower insert:

Day        SQL_ID        PLAN_HASH_VALUE Executions Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
---------- ------------- --------------- ---------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
2019-09-22 6mcqczrk3k5wm       472069319        129         36734.0024     20656.8462    462.098677                  0                      0             38.8160385          666208.285         1139.86923             486.323077
2019-09-29 6mcqczrk3k5wm       472069319        130         44951.6935     27021.6031    573.245664                  0                      0             21.8764885           879019.29         1273.52672             522.083969
2019-10-06 6mcqczrk3k5wm       472069319        130         9624.33742     7530.07634    264.929008                  0                      0             1.26370992          241467.023         678.458015             443.427481
2019-10-13 6mcqczrk3k5wm       472069319        130         55773.0864      41109.542    472.788031                  0                      0             17.5326031          1232828.64         932.083969             289.183206
2019-10-20 6mcqczrk3k5wm       472069319        130         89684.8089     59261.2977    621.276122                  0                      0             33.7963893          1803517.19         1242.61069             433.473282
2019-10-27 6mcqczrk3k5wm       472069319        130         197062.591     144222.595    561.707321                  0                      0             362.101267          10636602.9         1228.91603             629.839695

It averaged 197062 milliseconds last weekend but 89684 the previous one. The target table has always been set to LOGGING so FORCE LOGGING would not change anything with it.

One of the three INSERT statements that I expected to be slowed by FORCE LOGGING was faster this weekend than without FORCE LOGGING last weekend:

Day        SQL_ID        PLAN_HASH_VALUE Executions Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
---------- ------------- --------------- ---------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
2019-09-22 0u0drxbt5qtqk       382840242          1         2610257.66         391635    926539.984                  0                      0              13718.453             5483472           745816.5                3689449
2019-09-29 0u0drxbt5qtqk       382840242          1         17127212.3        1507065    12885171.7                  0                      0             14888.4595            18070434          6793555.5             15028884.5
2019-10-06 0u0drxbt5qtqk       382840242          1         3531931.07         420150    2355139.38                  0                      0             12045.0115             5004273            1692754                5101998
2019-10-13 0u0drxbt5qtqk       382840242          1         1693415.59         180730    1250325.41                  0                      0               819.7725           2242638.5           737704.5                2142812
2019-10-20 0u0drxbt5qtqk       382840242          1         5672230.17         536115    3759795.33                  0                      0             10072.9125             6149731            2332038              2806037.5
2019-10-27 0u0drxbt5qtqk       382840242          1         2421533.59         272585    1748338.89                  0                      0               9390.821           3311219.5           958592.5              2794748.5

It ran 2421533 milliseconds this weekend and 5672230 the prior one. So clearly FORCE LOGGING did not have much effect on its overall run time.
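
Both summaries above come from my sqlstatsumday.sql script, which is essentially a per-day rollup of AWR data. A simplified sketch of that kind of query (not the script itself) looks roughly like this:

-- Rough per-day average elapsed time per execution for one SQL_ID, from AWR
select trunc(sn.end_interval_time) as snap_day,
       ss.sql_id,
       ss.plan_hash_value,
       sum(ss.executions_delta)    as executions,
       sum(ss.elapsed_time_delta) / 1000 /
         nullif(sum(ss.executions_delta), 0) as elapsed_avg_ms
from   dba_hist_sqlstat  ss
join   dba_hist_snapshot sn
  on   sn.snap_id = ss.snap_id
 and   sn.dbid = ss.dbid
 and   sn.instance_number = ss.instance_number
where  ss.sql_id = '6mcqczrk3k5wm'
group  by trunc(sn.end_interval_time), ss.sql_id, ss.plan_hash_value
order  by snap_day;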

It went so well this weekend that we decided to leave FORCE LOGGING in for now to see if it slows down the mid-week jobs and the web-based front end. I was confident on Friday, but I am even more confident now that NOLOGGING writes have minimal performance benefits on this system. But we will let it bake in for a while. Really, we might as well leave it in for good, if only for the recovery benefits. Then, when we configure GGS for the zero-downtime upgrade, it will already have been in place for some time.

The lesson for me from this experience, and the message of my last three posts, is that NOLOGGING writes may have fewer benefits than you think, or your system may be doing fewer NOLOGGING writes than you think. That was true for me for this one database. It may be true for other systems that I expect to have a lot of NOLOGGING writes. Maybe someone reading this will find that they can safely use FORCE LOGGING on a database that they think does a lot of NOLOGGING writes, but which really does not need NOLOGGING for good performance.

Bobby

Categories: DBA Blogs

Basic Replication -- 10 : ON PREBUILT TABLE

Hemant K Chitale - Mon, 2019-10-28 09:05
In my previous blog post, I've shown a Materialized View that is built as an empty MV and subsequently populated by a Refresh call.

You can also define a Materialized View over an *existing*  (pre-populated) Table.

Let's say you have a Source Table and have built a Replica of it it another Schema or Database.  Building the Replica may have taken an hour or even a few hours.  You now know that the Source Table will have some changes every day and want the Replica to be updated as well.  Instead of executing, say, a TRUNCATE and INSERT, into the Replica every day, you define a Fast Refresh Materialized View over it and let Oracle identify all the changes (which, on a daily basis, could be a small percentage of the total size of the Source/Replica) and update the Replica using a Refresh call.


Here's a quick demo.

SQL> select count(*) from my_large_source;

COUNT(*)
----------
72447

SQL> grant select on my_large_source to hr;

Grant succeeded.

SQL> connect hr/HR@orclpdb1
Connected.
SQL> alter session enable parallel dml;

Session altered.

SQL> create table my_large_replica
2 as select * from hemant.my_large_source
3 where 1=2;

Table created.

SQL> insert /*+ PARALLEL (8) */
2 into my_large_replica
3 select * from hemant.my_large_source;

72447 rows created.

SQL>


So, now, HR has a Replica of the Source Table in the HEMANT schema.  Without any subsequent updates to the Source Table, I create the Materialized View definition, with the "ON PREBUILT TABLE" clause.

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> create materialized view log on my_large_source;

Materialized view log created.

SQL> grant select, delete on mlog$_my_large_source to hr;

Grant succeeded.

SQL> connect hr/HR@orclpdb1
Connected.
SQL>
SQL> create materialized view my_large_replica
2 on prebuilt table
3 refresh fast
4 as select * from hemant.my_large_source;

Materialized view created.

SQL> select count(*) from hemant.my_large_source;

COUNT(*)
----------
72447

SQL> select count(*) from my_large_replica;

COUNT(*)
----------
72447

SQL>


I am now ready to add data and Refresh the MV.

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> desc my_large_source
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
ID_COL                                    NOT NULL NUMBER
PRODUCT_NAME                                       VARCHAR2(128)
FACTORY                                            VARCHAR2(128)

SQL> insert into my_large_source
2 values (74000,'Revolutionary Pin','Outer Space');

1 row created.

SQL> commit;

Commit complete.

SQL> select count(*) from mlog$_my_large_source;

COUNT(*)
----------
1

SQL>
SQL> connect hr/HR@orclpdb1
Connected.
SQL> select count(*) from hemant.my_large_source;

COUNT(*)
----------
72448

SQL> select count(*) from my_large_replica;

COUNT(*)
----------
72447

SQL>
SQL> execute dbms_mview.refresh('MY_LARGE_REPLICA','F');

PL/SQL procedure successfully completed.

SQL> select count(*) from my_large_replica;

COUNT(*)
----------
72448

SQL>
SQL> select id_col, product_name
2 from my_large_replica
3 where factory = 'Outer Space'
4 /

ID_COL
----------
PRODUCT_NAME
--------------------------------------------------------------------------------
74000
Revolutionary Pin


SQL>
SQL> select count(*) from hemant.mlog$_my_large_source;

COUNT(*)
----------
0

SQL>


Instead of rebuilding / repopulating the Replica Table with all 72,448 rows, I used the MV definition and the MV Log on the Source Table to copy over that 1 new row.

The above demonstration is against 19c.
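
To make the daily update hands-off, the Fast Refresh can be scheduled; a minimal sketch using DBMS_SCHEDULER, with an arbitrary job name and schedule:

-- Arbitrary job name and schedule: fast-refresh the replica once a day
begin
  dbms_scheduler.create_job(
    job_name        => 'REFRESH_MY_LARGE_REPLICA',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[begin dbms_mview.refresh('MY_LARGE_REPLICA','F'); end;]',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
end;
/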

Here are two older posts, one in March 2009 and the other in January 2012 on an earlier release of Oracle.


Categories: DBA Blogs
