Don't assume!

We should not assume, just as we should not blindly believe in rules of thumb!

Consider this query:

select c1, sum(c2)
from t1
group by c1

This query used to return the result set sorted by c1, but that changed in 10g R2: the optimizer may now use a HASH GROUP BY operation to implement the grouping rather than the SORT GROUP BY it used in earlier versions. So if sorted output is desired, the query must have an explicit ORDER BY.
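If sorted output is needed, say so explicitly; a minimal sketch using the same example names (t1, c1, c2):

-- Explicit ORDER BY: correct in every version, whether the plan
-- uses HASH GROUP BY or SORT GROUP BY.
select c1, sum(c2)
from t1
group by c1
order by c1;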

Similarly, there are join queries for which users may be getting a sorted result set today, but they cannot rely on it: if the execution plan changes, the rows may no longer come back sorted. If sorting is required, developers need to specify an ORDER BY clause explicitly.

I remember a case in which a junior developer wrote a query to dump table data to an ASCII CSV file. Obviously the column data in the CSV had to be in the same order as in the table. When I told him about the view user_tab_columns, he used it to estimate the maximum record length of the table in the CSV file (rather than manually summing all the column widths of the table). What he could have done instead was set a large LINESIZE along with TRIMSPOOL ON, but he wanted to cut short the work of typing select c1||','||c2||','||c3||','||... from the table, so he generated this select query from user_tab_columns. He assumed the columns would come back in the same order as in the table. The result was a wrong column order in the CSV file. So please don't assume: it was a view, and the row order is not guaranteed without an ORDER BY.
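A safer sketch of what he could have generated (T1 is a hypothetical table name used only for illustration): order by COLUMN_ID, which is the column's position in the table, rather than relying on whatever row order the view happens to return.

-- Generate the pieces of the concatenated select list in the table's
-- real column order; the trailing separator on the last piece still
-- needs to be trimmed by hand.
select column_name || '||'',''||' as piece
from user_tab_columns
where table_name = 'T1'   -- hypothetical table name
order by column_id;       -- column position in the table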

 

I/O how much you have - mind it!

I never advise rules of thumb, but some of them can be taken as a checklist, one by one, while tuning I/O:

Rule 1: mind how much I/O you have! So rule 1 is: minimize I/O.

Rule 2: maximize cached I/O.

Rule 3: minimize I/O contention.

How to cut I/O:

1. Cut unnecessary fetches. Be restrictive about the columns in the select list, and make sure all columns fetched in explicit/implicit cursors are actually used somewhere in the code. Try to take benefit of the 'fast full index scan'.

2. Check the usefulness of indexed columns. Some indexes may be slowing down DMLs while not yielding any query performance gain at all, so identify such indexes and drop them (see the index-monitoring sketch after this list).

3. Avoid triggers that perform a lot of queries/transactions and auditing from inside; these may actually be slowing down DMLs, especially when DMLs are issued in bulk.

4. Check that all tables/indexes have appropriate values set for PCTFREE and PCTUSED.

PCTFREE defaults to 10%, so for objects that will not undergo future updates you may be wasting not only 10% extra disk/cache memory but also causing more I/O.

Similarly, setting PCTUSED quite high means blocks go on and off the free list more frequently.

trick: setting PCTFREE higher can reduce hot block contention.

5. If CPU resources are available, some tables can be compressed. This will not only minimize I/O at the expense of CPU but also help meet the objective "maximize cached I/O". How? Because the table now needs fewer buffers, you have more free buffers that other objects can use. This is very useful when there is no shortage of CPU but memory is scarce.

Remember, in 10g compression applies only to bulk operations such as CTAS and direct-path INSERT ... SELECT; it is 11g that brings OLTP table compression.


6. If using materialized views for replication or reporting, try to make their refresh possible by the FAST method. If a full (COMPLETE) refresh is being used, think twice: what about refreshing by truncate/insert instead?


7. Optimize the query execution plan. This is a subjective topic in itself.

- Tables should be indexed appropriately and the indexes should have good selectivity. If an index is not unique, it is good for it to have a low clustering factor.

- Check that all tables have accurate statistics, and that the statistics were gathered when the tables held representative data.

- Check that the instance optimizer parameters have been set correctly. On RAC, they should have the same values on all instances.

- Trade off saving CPU versus getting a good execution plan:

cursor_sharing=exact may be far better than cursor_sharing=similar or cursor_sharing=force.
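For point 2 above, one way (not the only one) to identify drop candidates is index usage monitoring; a minimal sketch, where T1_IDX1 is a hypothetical index name:

-- Turn on usage monitoring for a suspect index, let a representative
-- workload run, then check whether it was ever used.
alter index t1_idx1 monitoring usage;

-- v$object_usage lists monitored indexes owned by the current user;
-- USED = 'NO' after a representative period makes the index a drop candidate.
select index_name, table_name, monitoring, used
from v$object_usage;

-- Switch monitoring off once done.
alter index t1_idx1 nomonitoring usage;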
 
 
Maximize cached I/O

1. Explore whether you need to configure KEEP and RECYCLE pools in your database for frequently accessed (small) and least accessed (bigger) tables; size them appropriately and assign the related objects to these pools (see the sketch after this list).

2. Set the buffer cache high enough to minimize physical reads.

3. If using a bigger SGA (> 16 GB) on Linux, use huge pages.

4. Set PGA_AGGREGATE_TARGET appropriately. Remember that a higher PGA value can favour a sort-merge join over a nested loop join.
.
.
.
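A minimal sketch for point 1; the sizes and table names below are placeholders only, not recommendations:

-- Size the KEEP and RECYCLE pools (values are illustrative).
alter system set db_keep_cache_size = 200M;
alter system set db_recycle_cache_size = 100M;

-- Keep a small, hot lookup table cached and push a large, rarely
-- re-read table to the RECYCLE pool (table names are hypothetical).
alter table small_hot_lookup storage (buffer_pool keep);
alter table big_history_table storage (buffer_pool recycle);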
 
Minimize I/O contention:
 
Balance the I/O across multiple disks/arrays if possible.
Take care of all I/O sources: redo logs, undo tablespace, temporary tablespaces, index tablespaces, DATA tablespaces, and archive logs too if the DB is running in archivelog mode. Spread these across disks depending on their concurrent usage. You can check the statspack/AWR report for I/O usage per tablespace/datafile (a quick query is sketched below).
.
.
.
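A quick per-datafile snapshot can also be taken straight from the dynamic views; a minimal sketch:

-- Physical reads/writes per datafile since instance startup; heavy readers
-- and writers sitting on the same disk are candidates to be separated.
select df.name, fs.phyrds, fs.phywrts
from v$filestat fs, v$datafile df
where fs.file# = df.file#
order by fs.phyrds + fs.phywrts desc;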


basics of connectivity - connecting to the Oracle Database Server, and easy connect from 10g

Go to the Oracle installation directory, e.g. C:\oracle\product\10.2.0\db_1\network\ADMIN\

Check for the listener.ora and tnsnames.ora files on the DB server.
If there is an IP/port change, make sure to reflect it here.
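For reference, a typical tnsnames.ora entry looks like the sketch below; the alias, host, port and service name are placeholders (matching the easy connect example further down) and must be replaced with your own values:

DEVDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.90.100)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DEVDB)
    )
  )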

If the OS is Windows, the Net Manager tool can be run from the Oracle entry in Windows > Start.

To check which system-level DB privileges have been granted:

sqlplus /nolog
SQL> conn username/pwd
SQL> select * from session_privs;

Using SQL*Plus, log in as the SYS or SYSTEM user:

sqlplus /nolog
SQL> conn sys/pwd as sysdba
SQL> drop user username cascade;

SQL> grant connect,resource to username identified by pwd;

From 10g Release 2 onward, the CONNECT role has only the CREATE SESSION privilege.
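So if the schema needs to create its own objects, grant those privileges explicitly; a minimal sketch (the privilege list is only an example, adjust it to what the schema really needs):

grant create table, create view, create sequence, create procedure, create synonym to username;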

Then from the command line:

imp username/pwd file=filename.dmp fromuser=scott touser=username log=impuser.log

Oracle easy connect feature from 10g onward:

sqlplus username/pwd@IP_ADDRESS_OF_DB_SERVER:PORT_NUMBER/SERVICE_NAME

sqlplus scott/tiger@192.168.90.100:1521/DEVDB


 

tracking DDL while the DB is in noarchivelog mode

You want to track DDL and your DB is in noarchivelog mode; I attempted it as below [no way but to use the dictionary catalog in a flat file].
SQL> conn / as sysdba
Connected.
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
Session altered.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
> STARTTIME => '13-apr-2009 12:42:00', -
> ENDTIME => '15-apr-2009 11:55:00', -
> OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
> DBMS_LOGMNR.CONTINUOUS_MINE);
BEGIN DBMS_LOGMNR.START_LOGMNR( STARTTIME => '13-apr-2009 12:42:00', ENDTIME => '15-apr-2009 11:55:00', OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE); END;

*
ERROR at line 1:
ORA-01325: archive log mode must be enabled to build into the logstream


And again, below, you get an error when you try to use the redo logs for building the dictionary:
SQL> EXECUTE DBMS_LOGMNR_D.BUILD ( options=>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
BEGIN DBMS_LOGMNR_D.BUILD ( options=>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS); END;
*
ERROR at line 1:
ORA-01325: archive log mode must be enabled to build into the logstream

If a DB bounce is affordable, go ahead as follows:
SQL> alter system set utl_file_dir='c:\ora' scope=spfile;
SQL> shutdown immediate
SQL> startup

SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', 'c:\dict', OPTIONS => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
PL/SQL procedure successfully completed.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DDL_DICT_TRACKING);
BEGIN DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DDL_DICT_TRACKING); END;
*
ERROR at line 1:
ORA-01292: no log file has been specified for the current LogMiner session
ORA-06512: at "SYS.DBMS_LOGMNR", line 53
ORA-06512: at line 1

Seems I was in too much of a hurry: no log files had been added yet!!

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'C:\ORACLE\ORADATA\CORPENH\REDO01.LOG', OPTIONS => DBMS_LOGMNR.NEW);

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'C:\ORACLE\ORADATA\CORPENH\REDO02.LOG', OPTIONS => DBMS_LOGMNR.NEW);

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'C:\ORACLE\ORADATA\CORPENH\REDO03.LOG', OPTIONS => DBMS_LOGMNR.NEW);

[file 3 was the current log group's file]

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DDL_DICT_TRACKING);

SQL> select count(*) from v$logmnr_contents;
COUNT(*)
----------
48050

SQL> create table t1 as select * from v$logmnr_contents;
Table created.

SQL> create table t2(c1 number);
Table created.

SQL> truncate table t2;
Table truncated.

SQL> create table t1_log as select * from v$logmnr_contents;
Table created.

Query from another session:

SELECT t.session_info, t.sql_redo, t.*
FROM t1_log t
WHERE UPPER(sql_redo) LIKE UPPER('%truncate%') OR operation = 'DDL'

This shows that the DDL operations issued above at the SQL> prompt were also tracked.
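Once the contents have been copied out, close the LogMiner session:

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;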

tracking down resource intensive queries - made easy in 10g

From 10g you don't need to continuously query v$session_wait to track wait events and log the data into a table to aggregate them: ASH (Active Session History) does it for you.


top wait events and total time waited :

select active_session_history.event,
sum(active_session_history.wait_time +
active_session_history.time_waited) ttl_wait_time
from v$active_session_history active_session_history
where active_session_history.sample_time between sysdate - 60/2880 and sysdate
group by active_session_history.event
order by 2 desc

top sessions id,username and total time waited :

select sesion.sid,
sesion.username,
sum(active_session_history.wait_time +
active_session_history.time_waited) ttl_wait_time
from v$active_session_history active_session_history,
v$session sesion
where active_session_history.sample_time between sysdate - 60/2880 and sysdate
and active_session_history.session_id = sesion.sid
group by sesion.sid, sesion.username
order by 3 desc



top user id/name,sql_text and total time waited:

SELECT active_session_history.user_id,
dba_users.username,
sqlarea.sql_text,
SUM(active_session_history.wait_time +
active_session_history.time_waited) ttl_wait_time
FROM v$active_session_history active_session_history,
v$sqlarea sqlarea,
dba_users
WHERE active_session_history.sample_time BETWEEN SYSDATE - 1 AND SYSDATE
AND active_session_history.sql_id = sqlarea.sql_id
AND active_session_history.user_id = dba_users.user_id
GROUP BY active_session_history.user_id,sqlarea.sql_text, dba_users.username
ORDER BY 4 DESC

top segments name/type ,events and total wait on them:


SELECT dba_objects.object_name,
dba_objects.object_type,
active_session_history.event,
SUM(active_session_history.wait_time +
active_session_history.time_waited) ttl_wait_time
FROM v$active_session_history active_session_history,
dba_objects
WHERE active_session_history.sample_time BETWEEN SYSDATE - 1 AND SYSDATE
AND active_session_history.current_obj# = dba_objects.object_id
GROUP BY dba_objects.object_name, dba_objects.object_type, active_session_history.event
ORDER BY 4 DESC

top 10 completed queries:

SELECT *
FROM
(SELECT sql_text,
sql_id,
elapsed_time,
cpu_time,
user_io_wait_time,
elapsed_Time/executions,executions
FROM sys.v_$sqlarea
WHERE executions>0
ORDER BY 6 DESC)
WHERE ROWNUM < 11


executions = 0 means the query is still in progress (it has not yet completed a single execution).
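To see what is running right now, a quick check (sketch) against v$sqlarea:

SELECT sql_id, users_executing, sql_text
FROM v$sqlarea
WHERE users_executing > 0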

Extracting an execution plan from the library cache:


SELECT LPAD(' ', 2*(LEVEL-1))||operation||' '||
DECODE(ID, 0, 'Cost = '||position) "OPERATION",
options, object_name
FROM v$sql_plan
START WITH (sql_id='&sql_idof_query'
AND child_number = 0
AND ID=0 )
CONNECT BY PRIOR ID = parent_id
AND PRIOR address = address
AND PRIOR hash_value = hash_value
AND PRIOR child_number = child_number
ORDER BY ID, position

Queries whose plans use a merge join:

SELECT st.sql_id, operation, options, object_name
FROM v$sqltext st , v$sql_plan sp
WHERE st.sql_id=sp.sql_id AND
operation LIKE '%MERGE JOIN%' AND options IS NULL
ORDER BY st.sql_id,piece


Query to find SQLs whose text contains 'INDX_IND1':

SELECT /*+ hola */ * FROM v$sqlarea WHERE sql_text LIKE '%INDX_IND1%' AND sql_text NOT LIKE '%hola%'


Parallel Query in Oracle !! Beware!!

Run a query in parallel only if you have idle CPUs and the table is distributed across disks, but again, be careful when choosing the degree of parallelism. Over-parallelization can kill performance.

Just imagine 8 CPUs and the parallel_threads_per_cpu parameter at its default value of 2.

You fired a query with degree 4, but it is taking cpu_count * parallel_threads_per_cpu = 16! Hit a few more such queries and you can see the PX (idle) wait events and helplessly watch a terrific slowdown!!
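Before firing such queries it is worth checking which tables carry a parallel degree and how many PX slaves are already allocated; a minimal sketch:

-- Tables whose DEGREE is not 1 (DEGREE is stored as a padded string).
SELECT table_name, degree
FROM user_tables
WHERE TRIM(degree) <> '1';

-- Parallel execution slaves currently allocated in the instance.
SELECT * FROM v$px_process;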

Query session wait details for parallel queries (PX wait events):

SELECT * FROM v$Session_wait
WHERE sid IN ( SELECT sid FROM v$session
WHERE
username='TEST_USER' AND
status='ACTIVE'
)
AND wait_time=0
AND sid IN ( SELECT sid FROM v$px_session)
ORDER BY 1


351 1699 direct path read file number 5 0000000000000005 first dba 2588307 0000000000277E93 block cnt 126 000000000000007E 1740759767 8 User I/O 0 0 WAITING
416 1498 direct path read file number 5 0000000000000005 first dba 2573843 0000000000274613 block cnt 126 000000000000007E 1740759767 8 User I/O 0 0 WAITING
419 26 PX Deq: Execution Msg sleeptime/senderid 268566527 000000001001FFFF passes 26 000000000000001A 0 00 2723168908 6 Idle 0 3 WAITING
445 26 PX Deq: Execution Msg sleeptime/senderid 268566527 000000001001FFFF passes 26 000000000000001A 0 00 2723168908 6 Idle 0 3 WAITING
575 26 PX Deq: Execution Msg sleeptime/senderid 268566527 000000001001FFFF passes 26 000000000000001A 0 00 2723168908 6 Idle 0 3 WAITING
618 996 PX Deq: Execute Reply sleeptime/senderid 200 00000000000000C8 passes 1 0000000000000001 0 00 2723168908 6 Idle 0 21 WAITING
684 26 PX Deq: Execution Msg sleeptime/senderid 268566527 000000001001FFFF passes 26 000000000000001A 0 00 2723168908 6 Idle 0 3 WAITING
693 1616 direct path read file number 5 0000000000000005 first dba 2566803 0000000000272A93 block cnt 126 000000000000007E 1740759767 8 User I/O 0 0 WAITING
702 1329 direct path read file number 5 0000000000000005 first dba 2599955 000000000027AC13 block cnt 126 000000000000007E 1740759767 8 User I/O 0 0 WAITING

------------

345 194 PX Deq Credit: send blkd sleeptime/senderid 268566527 000000001001FFFF passes 175 00000000000000AF qref 0 00 1893977003 0 Other 0 0 WAITING
356 1248 PX Deq: Execution Msg sleeptime/senderid 268566527 000000001001FFFF passes 1244 00000000000004DC 0 00 2723168908 6 Idle 0 0 WAITING

So it is better to disable parallelism on such tables with: ALTER TABLE <table_name> NOPARALLEL;