Display Oracle SQL Developer XMLTYPE

Do you see an annoying (XMLTYPE) placeholder in Oracle SQL Developer query output?
Wrap the column in the xmlType.getClobVal function to see the actual data.

SELECT q.col1
     , XMLTYPE.GETCLOBVAL ( q.col1 )
  FROM q ;
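
For a self-contained test (table name and data are illustrative):

CREATE TABLE q ( col1 XMLTYPE ) ;
INSERT INTO q ( col1 ) VALUES ( XMLTYPE ( '<customer><name>Smith</name></customer>' ) ) ;
-- the first column displays as (XMLTYPE), the second shows the actual XML text
SELECT q.col1
     , XMLTYPE.GETCLOBVAL ( q.col1 )
  FROM q ;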


Posted in XML

TKProf by ASH

Oracle session trace + tkprof is still the best tool for performance diagnosis. But:

  • It needs access to the user_dump_dest directory on the database server.
  • It has to be switched on in advance.
  • It has a performance impact, especially if tracing multiple sessions or tracing for a long time.

ASH is a good enough alternative much of the time.

The scripts below mimic tkprof output using only ASH data.
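
As a flavour of the approach (a minimal sketch, not the full scripts), the query below approximates per-statement elapsed and cpu time for one session from ASH samples. It assumes the diagnostic pack is licensed, and relies on each ASH sample representing roughly one second of active time; :sid is a bind for the session of interest.

SELECT h.sql_id
     , COUNT(*) AS approx_elapsed_secs
     , SUM ( CASE WHEN h.session_state = 'ON CPU' THEN 1 ELSE 0 END ) AS approx_cpu_secs
  FROM gv$active_session_history h
 WHERE h.session_id = :sid
   AND h.sample_time > SYSDATE - 1 / 24  -- last hour
 GROUP BY h.sql_id
 ORDER BY approx_elapsed_secs DESC ;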

Read more ›

Posted in Performance tuning, Scripts

What to do if a database performance issue is reported

A high level overview.

The expectation should be that all OLTP operations take less than 1 second, and that all batch processing jobs and background scheduled reports take less than 1 hour each. That is deliverable even on large databases with moderate hardware, with some effort put in.

1) Is the issue even a performance issue at all?
Optionally, quickly confirm this is a real performance issue rather than a general system issue being misreported as one. Three issues that are not database performance issues are commonly reported this way:

1.1) Is there a system wide loss-of-service or near-loss-of-service due to server or database down, filesystem not mounted, space at 100%, rogue process consuming 100% cpu, etc.?

1.2) Are sessions waiting for a row lock to be released by another session?
In Oracle, use $ORACLE_HOME/rdbms/admin/utllockt.sql, gv$session, or the Enterprise Manager GUI.
In Microsoft SQL Server, right click the instance in Management Studio > Activity Monitor.
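
A quick Oracle check for lock waiters, using the blocking_session columns of gv$session:

SELECT s.inst_id , s.sid , s.serial# , s.username , s.event
     , s.blocking_instance , s.blocking_session , s.seconds_in_wait
  FROM gv$session s
 WHERE s.blocking_session IS NOT NULL
 ORDER BY s.seconds_in_wait DESC ;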

1.3) Are application servers rather than database servers the bottleneck?
Quickly check cpu on one or more of the application servers:
In Windows, right click the taskbar > Start Task Manager > Performance tab.
In Unix, top or topas or sar 5 5 etc.
Typically extra application servers can quickly and cheaply be cloned up in a ‘web garden’ architecture, or at least extra cpu resource can be allocated to the existing servers.
Requiring a lot of application servers is an indication that more application processing should be moved into the database, although that is normally more of a long term improvement aspiration.
Unlikely nowadays, but users’ PCs could also be the bottleneck for client-server applications.

Now onto Database Tuning:

2) Identify the session or process consuming the most resources, if possible limiting your search to the sessions related to the particular application or user who has the performance problem.
In Oracle, query gv$session or look at top sessions in Oracle Enterprise Manager or Oracle SQL Developer.
In Microsoft SQL Server, use SQL Server Profiler or right click the instance in Management Studio > Activity Monitor.
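
In Oracle, a simple starting point (a sketch, not exhaustive) is active user sessions ordered by how recently they made a database call:

SELECT s.inst_id , s.sid , s.serial# , s.username , s.program , s.sql_id , s.event , s.last_call_et
  FROM gv$session s
 WHERE s.status = 'ACTIVE'
   AND s.type != 'BACKGROUND'
 ORDER BY s.last_call_et DESC ;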

3) Obtain the SQL statements which that session recently ran. Focus first on the top SQL statement from that set.
In Oracle, use:
3.1) dbms_monitor.session_trace_enable + tkprof (best by far), or
3.2) $ORACLE_HOME/rdbms/admin/awrrpt.sql (if diagnostic pack licensed) (for RAC, awrgrpt.sql), or
3.3) $ORACLE_HOME/rdbms/admin/spreport.sql (if diagnostic pack not licensed), or
3.4) AWR performance view in Oracle Enterprise Manager (if diagnostic pack licensed).
In Microsoft SQL Server, use SQL Server Profiler.
Be careful here not to be drawn into wasting time on an irrelevant session, or on SQL from one of the irrelevant sessions.
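
For option 3.1, a sketch of the calls (the sid and serial# values are illustrative, taken from gv$session):

EXEC DBMS_MONITOR.session_trace_enable ( session_id => 123 , serial_num => 45678 , waits => TRUE , binds => TRUE )
-- let the slow workload run, then:
EXEC DBMS_MONITOR.session_trace_disable ( session_id => 123 , serial_num => 45678 )
-- then run tkprof on the resulting trace file from user_dump_dest, e.g.
-- tkprof mydb_ora_12345.trc mydb_ora_12345.txt sort=exeela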

4) Indexes – this is normally where the performance fix is at.
Find which indexes make that statement run faster.
In Oracle (if you have the diagnostic and tuning packs licensed), $ORACLE_HOME/rdbms/admin/sqltrpt.sql or Enterprise Manager.
In Microsoft SQL Server, use Database Engine Tuning Advisor – but only on a test server or out of hours, since it is itself very resource intensive.
The recommended SQL Profiles (Oracle) and create statistics commands (Microsoft SQL Server) tend to be spurious, but the index recommendations are better – at least as a starting point, although they will often recommend more indexes, or more columns in an index, than are actually necessary.
Indexes can also be created individually, with the execution plan used to see which indexes are beneficial.
In Oracle that is available in Oracle SQL Developer, or with EXPLAIN PLAN plus dbms_xplan.display, or set autotrace.
In Microsoft SQL Server that is available in the query window of Management Studio.
In MySQL, prefix your SQL statement with the EXPLAIN keyword.
Function based indexes, such as on UPPER(surname) or TRUNC(datetime), can be essential (see the sketch below).
In Oracle, indexes can be switched off and on with visible/invisible to help speed up testing different scenarios.
Don’t be afraid to add indexes: the extra overhead on inserts is negligible compared to the savings on selects.
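
A sketch of a function-based index supporting case insensitive surname searches, together with the visible/invisible toggle (table, column and index names are illustrative):

CREATE INDEX customers_up_surname_ix ON customers ( UPPER ( surname ) ) ;
-- queries of this shape can now use the index:
SELECT * FROM customers c WHERE UPPER ( c.surname ) = 'SMITH' ;
-- switch the index off and on while testing scenarios:
ALTER INDEX customers_up_surname_ix INVISIBLE ;
ALTER INDEX customers_up_surname_ix VISIBLE ;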

That may be enough to fix the problem. If not, the next steps follow (unfortunately, from here on things can start to be time consuming):

5) SQL Code. Can it be rewritten to be more efficient?
Look for slow-by-slow loop processing, unnecessary table joins (often seen with views, and views-of-views), unnecessary outer joins, and old leftover hints and SQL profiles that are now inefficient.
Try to move the bulk of the main processing into a single SQL statement at the start, rather than issuing large numbers of separate SQL commands within a loop or repeated function calls.
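
A hedged illustration of that rewrite (table and column names invented):

-- slow-by-slow: one update issued per row fetched
BEGIN
   FOR r IN ( SELECT order_id FROM orders WHERE status = 'NEW' )
   LOOP
      UPDATE orders SET status = 'PROCESSED' WHERE order_id = r.order_id ;
   END LOOP ;
END ;
/
-- the same work done in a single set-based statement:
UPDATE orders SET status = 'PROCESSED' WHERE status = 'NEW' ;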

6) Are you suffering plan instability?
A real curse for Oracle.
It means SQL runs fine some days and dead slow on others.
Fixes include removing histograms from optimizer stats, setting the cursor_sharing parameter to its (default) value of exact, freezing optimizer statistics in place, and using SQL Plan Management baselines.
Plan instability is not (afaik) as much of an issue with Microsoft SQL Server, but freezing optimizer statistics in place may still be needed there.
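
For example, to stop histograms being gathered on one table (a sketch – the owner and table names are illustrative):

EXEC DBMS_STATS.set_table_prefs ( ownname => 'APP' , tabname => 'ORDERS' , pname => 'METHOD_OPT' , pvalue => 'FOR ALL COLUMNS SIZE 1' )
EXEC DBMS_STATS.gather_table_stats ( ownname => 'APP' , tabname => 'ORDERS' )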

7) Batch processing schedules.
Can they be moved to start earlier, or run immediately after each other in sequence with no gaps of wasted time, or set to run simultaneously rather than one at a time?
Adding a status or last_modified timestamp column to the main table, with an index on that column, can greatly reduce the time taken for the initial select of records to be processed, as sketched below.
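
A sketch of that idea (names illustrative):

ALTER TABLE orders ADD ( processed_flag VARCHAR2(1) DEFAULT 'N' ) ;
CREATE INDEX orders_processed_flag_ix ON orders ( processed_flag ) ;
-- the batch job's driving select then touches only unprocessed rows:
SELECT o.order_id FROM orders o WHERE o.processed_flag = 'N' ;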

8) Reporting specific issues.
Consider moving slow reports to a dedicated reporting database away from the live OLTP database.
You can make queries automatically rewrite to use fast, small summary objects containing the results the user wants. To do that, create:
Oracle – materialized views.
Microsoft SQL Server – indexed views.
It is also possible to manually build summary tables and point reports at those, although that usually involves more work than the automatic method above.
Reporting data may need far more indexes than OLTP data.
Bitmap indexes and table and index compression may be useful in a reporting database.
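
A minimal Oracle sketch of the automatic rewrite approach (object names and aggregates are illustrative):

CREATE MATERIALIZED VIEW orders_daily_mv
ENABLE QUERY REWRITE
AS
SELECT TRUNC ( o.order_date ) AS order_day
     , COUNT(*) AS order_count
     , SUM ( o.order_amount ) AS order_amount
  FROM orders o
 GROUP BY TRUNC ( o.order_date ) ;

Reports issuing the equivalent GROUP BY against the big orders table can then be transparently rewritten by the optimizer to read the small materialized view instead (this depends on query_rewrite_enabled being true and on the query shape matching).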

9) Are database or server parameters wrongly configured?
It is very unlikely in this day and age to find these set wrongly; that used to be common back in the day (1990s), so it still gets mentioned in old tuning books and articles.
Two that are still sometimes wrongly configured today are:
9.1) Oracle on linux without hugepages configured (the default is not to have hugepages configured).
9.2) Oracle on unix, non-ASM, with filesystems mounted using (the default) filesystem OS buffer caching instead of direct IO.
Memory assigned to the database should be generous, but that is now anyway done by default by the Oracle DBCA and Microsoft SQL Server install GUIs.
In MySQL this is innodb_buffer_pool_size=Xg in file my.cnf.
Don’t be tempted to skimp on pga_aggregate_target for Oracle on linux, especially for reporting and batch applications.
Physical memory chips are very cheap now, but cpu is still expensive – especially for database servers (as opposed to application servers), because the database license likely depends on the cpu count. Database servers are anyway not normally cpu bound.


Finally, some general warnings:

1) It is very easy to be distracted in performance investigations: you notice something that is maybe not quite right but is irrelevant to the issue at hand, and waste a lot of time investigating and fixing it.
That is especially easy if you know how to fix the irrelevant issue but don’t know how to fix the real one – human nature is to put off difficult tasks.
Make sure to focus on what is causing the problem and ignore everything else (or take a note to go back and look at those other things later, when you have spare time).

2) Higher spec hardware and newer versions of the OS or database software are always worthwhile, but most often make no difference to a particular performance issue – beware of recommending them as a performance fix.

3) Performance testing is essential before releasing new applications or new versions to live, and before upgrading database versions.
Batch jobs are easy to run against a test database.
For OLTP browser interface applications, make a recording in Apache JMeter (a free download) or a commercial equivalent like HP LoadRunner, and replay that against the new environment before go live.


Posted in Performance tuning

Rman list most recent backups

List the most recent rman database backups, from sqlplus on the target database, with:

set pages 9999 lines 132
ALTER SESSION SET nls_date_format = 'Dy DD-Mon-YYYY HH24:MI:SS' ;
SELECT b.checkpoint_time , b.incremental_level , COUNT(*)
  FROM v$backup_datafile b
  JOIN v$datafile f
    ON f.file# = b.file#
  JOIN v$tablespace ts
    ON f.ts# = ts.ts#
 WHERE ts.included_in_database_backup = 'YES'
   AND b.checkpoint_time > SYSDATE - 10
 GROUP BY b.checkpoint_time , b.incremental_level
 ORDER BY b.checkpoint_time ;

H/t Lisa T.

Posted in Rman

Oracle hint cursor_sharing_exact using sql patch

Bind variable peeking is still a cause of plan instability, even after 11gR1’s introduction of adaptive cursor sharing. In my case adaptive cursor sharing wasn’t helpful for a daily batch job, because the sql was aged out of the cache by the time the next scheduled run came around. Making everything worse, the cursor_sharing parameter was set to the non-default value of ‘force’ at a 3rd party vendor’s specification, meaning literal values useful to the optimizer were being hidden behind generated :SYS_B0 bind variables.

The fix for me was to add a /*+ cursor_sharing_exact */ hint to the SQL statement. Being unable to directly edit the SQL code, I used a SQL Patch to do that:

DECLARE
   l_sql_text CLOB ;
BEGIN
   SELECT sql_fulltext INTO l_sql_text FROM v$sqlarea WHERE ROWNUM < 2 AND sql_id = '82wxzdhxw224m' ;
   SYS.DBMS_SQLDIAG_INTERNAL.i_create_patch (
        sql_text => l_sql_text
      , hint_text => 'CURSOR_SHARING_EXACT'
      , name => 'Ainapps cursor_sharing_exact'  -- 30 characters max
      , description => 'Ainapps cursor_sharing_exact hint 09-Nov-2015 Andrew Fraser'  -- 500 characters max
   ) ;
END ;
/

The SQL patch is then visible in view dba_sql_patches.
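
If the patch is no longer wanted, it can be dropped with:

EXEC DBMS_SQLDIAG.drop_sql_patch ( name => 'Ainapps cursor_sharing_exact' )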

The last line of plan output will show:

   - SQL patch "Ainapps cursor_sharing_exact" used for this statement

Because there may now be child cursors both with and without that SQL Patch in cache, specify cursor_child_no => NULL to see all child cursors listed in plan output:

SELECT * FROM TABLE ( DBMS_XPLAN.display_cursor ( sql_id => '82wxzdhxw224m' , cursor_child_no => NULL ) ) ;

From version 12c, the optional force_match => TRUE parameter can be passed to i_create_patch so that the patch also applies across different literal values.

If this is happening a lot, removing histograms may be a better option than hinting/patching individual statements.

Posted in Performance tuning

Oracle select earliest and latest rows using keep group or first_value analytics

There are two ways of selecting the earliest and latest rows for each customer.

1) Keep group:

SELECT t1.customer_ref
     , MAX ( t1.order_amount ) KEEP ( DENSE_RANK FIRST ORDER BY t1.order_date ASC  ) AS earliest_order_amount
     , MAX ( t1.order_amount ) KEEP ( DENSE_RANK FIRST ORDER BY t1.order_date DESC ) AS latest_order_amount
  FROM t1
 WHERE t1.order_amount IS NOT NULL
 GROUP BY t1.customer_ref ;

Max can be replaced with min or avg if required – the choice is anyway irrelevant unless two orders for the same customer have identical order_dates.

2) First_value analytics:

SELECT DISTINCT t1.customer_ref
     , FIRST_VALUE ( t1.order_amount ) IGNORE NULLS OVER ( PARTITION BY t1.customer_ref ORDER BY t1.order_date ASC  , t1.order_amount DESC ) AS earliest_order_amount
     , FIRST_VALUE ( t1.order_amount ) IGNORE NULLS OVER ( PARTITION BY t1.customer_ref ORDER BY t1.order_date DESC , t1.order_amount DESC ) AS latest_order_amount
  FROM t1
 WHERE t1.order_amount IS NOT NULL ;

Again the ‘order_amount desc’ ordering is irrelevant unless two orders for the same customer have identical order_dates.

Sample data:

CREATE TABLE t1 ( customer_ref VARCHAR2(10) , order_amount NUMBER , order_date DATE ) ;
INSERT INTO t1 ( customer_ref , order_amount , order_date ) VALUES ( '001' , 100 , SYSDATE - 10 ) ;
INSERT INTO t1 ( customer_ref , order_amount , order_date ) VALUES ( '001' , 200 , SYSDATE - 6 ) ;
INSERT INTO t1 ( customer_ref , order_amount , order_date ) VALUES ( '001' , 300 , SYSDATE - 4 ) ;
INSERT INTO t1 ( customer_ref , order_amount , order_date ) VALUES ( '001' , NULL , SYSDATE - 1 ) ;
INSERT INTO t1 ( customer_ref , order_amount , order_date ) VALUES ( '002' , 100 , SYSDATE - 10 ) ;
INSERT INTO t1 ( customer_ref , order_amount , order_date ) VALUES ( '002' , 200 , SYSDATE - 6 ) ;
INSERT INTO t1 ( customer_ref , order_amount , order_date ) VALUES ( '002' , 300 , TRUNC ( SYSDATE - 4 ) ) ;
INSERT INTO t1 ( customer_ref , order_amount , order_date ) VALUES ( '002' , 250 , TRUNC ( SYSDATE - 4 ) ) ;
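
With this sample data, both queries should return earliest_order_amount = 100 and latest_order_amount = 300 for each customer: the NULL order_amount row for customer 001 is excluded by the WHERE clause, and the tie between customer 002’s two orders sharing the same TRUNC ( SYSDATE - 4 ) order_date is resolved to 300 by MAX / the order_amount DESC ordering.
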
Posted in Uncategorized

Windows PowerShell run script on all SQL Servers recovery mode

Microsoft SQL Server databases should be kept in full recovery mode (the equivalent of Oracle’s archivelog mode), except for the master, tempdb, and ReportServerTempDB databases.

To check all databases in a SQL Server instance, run the script below in a SQL command window. Optionally uncomment the EXEC line to have the script switch databases to full recovery mode.

use [master]

set nocount on

DECLARE @name sysname
      , @sqlcmd VARCHAR(MAX)

DECLARE db_cur CURSOR FOR
 SELECT d.name
   FROM sys.databases d
  WHERE d.recovery_model = 3 -- simple recovery
    AND d.name != 'master'
    AND d.name NOT LIKE '%tempdb'  -- tempdb , ReportServerTempDB, etc.

OPEN db_cur

FETCH NEXT FROM db_cur INTO @name

WHILE @@fetch_status = 0
BEGIN
    SET @sqlcmd = 'alter database ' + QUOTENAME ( @name ) + ' set recovery full with no_wait ; '
    --EXEC ( @sqlcmd )
    PRINT @sqlcmd
    FETCH NEXT FROM db_cur INTO @name
END

CLOSE db_cur
DEALLOCATE db_cur

Alternatively sp_MSforeachdb could do the same with fewer lines of code, except sp_MSforeachdb is undocumented and buggy.

Using Windows PowerShell, it is possible to automate that same script to run against many SQL Servers.

(1) First create a text file named e.g. C:\ServerInstances.txt with one line for each SQL Server instance (case insensitive). The port and instance name have to be specified if they are not the default values (1433 and the default instance).
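
For example (hypothetical server names; non-default instances and ports use the server\instance and server,port forms):

SERVER1
server2\SQLEXPRESS
server3,1434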


(2) Then copy the below Windows PowerShell script to a file named e.g. ServerInstances.ps1

Import-Module sqlps -DisableNameChecking

foreach ( $ServerInstance in Get-Content C:\ServerInstances.txt )
{
Write-Host "Working on $ServerInstance ..."

Invoke-Sqlcmd -ServerInstance "$ServerInstance" -Database master -Verbose -Query "
set nocount on

DECLARE @name sysname
      , @sqlcmd VARCHAR(MAX)

DECLARE db_cur CURSOR FOR
 SELECT d.name
   FROM sys.databases d
  WHERE d.recovery_model = 3 -- simple recovery
    AND d.name != 'master'
    AND d.name NOT LIKE '%tempdb'  -- tempdb , ReportServerTempDB, etc.

OPEN db_cur

FETCH NEXT FROM db_cur INTO @name

WHILE @@fetch_status = 0
BEGIN
    SET @sqlcmd = 'alter database ' + QUOTENAME ( @name ) + ' set recovery full with no_wait ; '
    --EXEC ( @sqlcmd )
    PRINT @sqlcmd
    FETCH NEXT FROM db_cur INTO @name
END

CLOSE db_cur
DEALLOCATE db_cur
"
}

Again, uncomment the EXEC line to have the script actually change the recovery mode rather than just report on it.

(3) Run that script in a Windows PowerShell command window.

PS> .\ServerInstances.ps1

or if you get error “File .\ServerInstances.ps1 cannot be loaded because the execution of scripts is disabled on this system.”, then:

PS> powershell.exe -executionpolicy bypass -file .\ServerInstances.ps1

Again, uncomment the EXEC line if you want the change to be applied rather than just reported on.

That uses AD authentication by default; SQL Server authentication can be used instead by adding the -Username and -Password options.

If setting databases into full recovery mode, remember to make sure transaction log backups are scheduled; otherwise the transaction logs will grow without limit.

Other T-SQL scripts you need can be plugged in in place of the recovery mode checks. If doing this a lot, it may be worth setting up and using a Central Management Server instead of Windows PowerShell.

H/t Moshin and dmc.

Posted in SQL server, Uncategorized

Oracle Full Text simple example for like % grep wildcard

The equivalent of the grep command in Oracle is LIKE:

SELECT * FROM dba_source s WHERE LOWER ( s.text ) LIKE '%within group%' ;

That works fast enough for querying dba_source, which is a small dataset. But for even medium sized tables you really want a fast indexed query, which the leading-wildcard LIKE '%...' syntax ordinarily rules out.

Oracle’s fast indexed equivalent of grep is catsearch:

SELECT * FROM dba_source s WHERE CATSEARCH ( s.text , 'within group' , NULL ) > 0 ;

That last command will fail with:

ORA-20000: Oracle Text error: DRG-10599: column is not indexed

until a full text index is created on the column. For the t1 demo table used below (assuming t1 has a VARCHAR2 column named text; the index name is illustrative):

CREATE INDEX t1_text_ix ON t1 ( text ) INDEXTYPE IS CTXSYS.CTXCAT ;

Once that index is created, the catsearch query runs without error, and runs very fast.

Catsearch is case insensitive, but it does behave differently from LIKE '%term%' in that it searches for occurrences of words rather than text strings. A catsearch of ‘within group’ is the same as “grep -i within | grep -i group”, rather than “grep -i ‘within group'”. Word boundaries are enforced by catsearch, so the below catsearch returns only rows 1-3, missing out row 4 even though that would be caught by a LIKE '%smith%' type condition:

INSERT INTO t1 ( text ) VALUES ( '1 Customer Smith info' ) ;
INSERT INTO t1 ( text ) VALUES ( '2 Customer smith info' ) ;
INSERT INTO t1 ( text ) VALUES ( '3 Customer-Smith-info' ) ;
INSERT INTO t1 ( text ) VALUES ( '4 CustomerSmithInfo' ) ;
SELECT * FROM t1 WHERE CATSEARCH ( t1.text , 'smith' , NULL ) > 0 ;

For most applications, that word boundary behaviour would be acceptable or even desirable.

The ctxsys.ctxcat index is fully transaction aware by default (unlike some other full text index types), even detecting changes in current session prior to commit.

Under the covers, the create index statement above also creates an associated table and trigger. So the index owner needs (at least temporarily) the create table and create trigger privileges, either granted directly or via a role.

Without those two privileges the create index statement will fail with:

ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-20000: Oracle Text error:
DRG-50857: oracle error in drvxtabc.create_indexes:TC
ORA-01031: insufficient privileges
ORA-06512: at "CTXSYS.DRUE", line 160

The index does take up significantly more space than a standard b-tree index, but that is likely a justifiable use of space if you have leading wildcard LIKE '%string%' type queries.

Posted in Performance tuning

Oracle top session info in RAC with tracefile name

See real time top session info in either RAC or standalone database, including tracefile name qualified by instance name, with:

-- ALTER SESSION SET nls_date_format = 'Dy DD-Mon-YYYY HH24:MI:SS' ;
SELECT DISTINCT s.inst_id , s.sid , s.serial# , s.username , s.osuser , s.program || ' ' || s.module AS program
     , s.status , ROUND ( s.last_call_et / 60 ) AS minutes , s.logon_time
     , s.state , s.event , s.seconds_in_wait , s.sql_id , q.sql_text
     , p.spid , s.service_name
     , CASE s.sql_trace WHEN 'ENABLED' THEN s.inst_id || ':' || p.tracefile END AS tracefile
  FROM gv$session s
  JOIN gv$sql q
    ON q.inst_id = s.inst_id
   AND q.sql_id = s.sql_id
  JOIN gv$process p
    ON p.inst_id = s.inst_id
   AND p.addr = s.paddr
 WHERE ( s.status = 'ACTIVE' OR last_call_et < 60 )  -- active in last minute
   AND s.username != 'SYS'
 ORDER BY s.inst_id , s.sid ;

Posted in Performance tuning, Uncategorized

10046 trace a module using logon trigger

1) Update – thanks to Joaquin Gonzalez for pointing out a preferable, safer method than a logon trigger for 10046 tracing a particular module:

EXEC DBMS_MONITOR.serv_mod_act_trace_enable ( service_name => 'myservice' , module_name => 'myModule.exe' , binds => TRUE )

You can verify the service_name and module are what you expect by checking the columns of the same names in gv$session.

To switch that off once you are done:

EXEC DBMS_MONITOR.serv_mod_act_trace_disable ( service_name => 'myservice' , module_name => 'myModule.exe' )

2) To generate 10046 traces just for a particular SQL statement, regardless of when it runs or who runs it:

ALTER SYSTEM SET EVENTS 'sql_trace [sql: sql_id=cnju0jr4y2ujs] bind=true , wait=true' ;

And to switch that off:

ALTER SYSTEM SET EVENTS 'sql_trace off' ;
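
On 11g and later, the resulting trace files are written to the directory shown by:

SELECT value FROM v$diag_info WHERE name = 'Diag Trace' ;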

H/t Julian Dyke.

3) The less preferable option, using a logon trigger to 10046 trace a particular module, is:

CREATE OR REPLACE TRIGGER set_trace
AFTER LOGON ON DATABASE
WHEN ( USER != 'SYS' )
BEGIN
   IF SYS_CONTEXT ( 'userenv' , 'module' ) LIKE 'myModule.exe%'
   THEN
      DBMS_MONITOR.session_trace_enable ( binds => TRUE ) ;
   END IF ;
END set_trace ;
/

For versions 10gR2 and below, replace the dbms_monitor line above with the older style syntax:

EXECUTE IMMEDIATE 'alter session set events ''10046 trace name context forever, level 12''' ;

Drop the logon trigger once you have collected the information you need; session tracing does add an overhead, especially when recording bind values as above.

Posted in Performance tuning