Standard decimal-to-hexadecimal conversion is done with:
SELECT TO_CHAR ( 10 , 'x' ) FROM dual ;
But I wanted a kind of extended hexadecimal that would keep single letters going up beyond 15/f, so would return 'g' for decimal 16 instead of returning hexadecimal '10'.
Why did I want that? I needed to squeeze a two-digit number column into a one-character column in an extract-transform-load operation. Most of the source data was less than 16, but there were a few values higher than that, and a simple decimal-to-hexadecimal translation wouldn't work for those.
The ASCII value for 'a' is 97, so using the CHR function to turn numbers into their ASCII character equivalents gives:
SELECT CASE
         WHEN numcol BETWEEN 0 AND 9 THEN TO_CHAR ( numcol )
         WHEN numcol BETWEEN 10 AND 35 THEN CHR ( 87 + numcol )
       END AS extended_hexadecimal
FROM mytable ;
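The same mapping can be sketched outside the database. Here is a minimal Python equivalent of that CASE expression (the function name extended_hex is illustrative, not from the original post):

```python
def extended_hex(n: int) -> str:
    """Map 0-35 to one character: 0-9 as digits, 10-35 as letters a-z."""
    if 0 <= n <= 9:
        return str(n)
    if 10 <= n <= 35:
        return chr(87 + n)  # chr(97) == 'a', so 10 -> 'a' and 16 -> 'g'
    raise ValueError("value out of single-character range")
```

As in the SQL version, 16 comes back as 'g' rather than hexadecimal '10'.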
Preferred method to stop and start the Oracle listener in a RAC environment:
srvctl config listener -n nodename
srvctl stop listener -n nodename
srvctl start listener -n nodename
You can denormalise data using the LISTAGG function, so to display users with their roles on a single comma-separated line:
SELECT grantee
     , LISTAGG ( granted_role , ' , ' ) WITHIN GROUP ( ORDER BY granted_role ) AS granted_roles
FROM dba_role_privs
GROUP BY grantee
ORDER BY 1 ;
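The same denormalisation can be sketched in plain Python: group rows by grantee and join the sorted roles into one string, just as LISTAGG does (the sample data here is made up):

```python
# Rows as (grantee, granted_role) pairs - illustrative sample data.
rows = [("ALICE", "DBA"), ("BOB", "RESOURCE"), ("ALICE", "CONNECT")]

# GROUP BY grantee: collect each grantee's roles.
grouped = {}
for grantee, role in rows:
    grouped.setdefault(grantee, []).append(role)

# LISTAGG ... WITHIN GROUP ( ORDER BY granted_role ): sort, then join.
denormalised = {g: " , ".join(sorted(roles)) for g, roles in grouped.items()}
```

`denormalised["ALICE"]` ends up as the single string "CONNECT , DBA".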
H/t Adrian Billington.
De-duplicating parent-child table sets of data, using the LISTAGG function to merge all the child rows into a single denormalised column.
set serverout on
BEGIN
  FOR d1 IN (
    WITH denorm AS (
      SELECT parent.aos_code
           , parent.pre_group_id
           , LISTAGG ( child.aos_code || child.stud_pass ) WITHIN GROUP ( ORDER BY child.aos_code || child.stud_pass ) AS entry_value
      FROM StcAPreList parent
      JOIN StcAPreGroupAos child
        ON parent.pre_group_id = child.pre_group_id
      GROUP BY parent.aos_code , parent.pre_group_id )
    SELECT a.pre_group_id
    FROM denorm a
    WHERE a.pre_group_id > (
      SELECT MIN ( b.pre_group_id )
      FROM denorm b
      WHERE a.aos_code = b.aos_code
      AND a.entry_value = b.entry_value )
    ORDER BY 1 )
  LOOP
    dbms_output.put_line ( 'deleting ' || d1.pre_group_id ) ;
    DELETE FROM StcAPreGroupAos child WHERE child.pre_group_id = d1.pre_group_id ;
    DELETE FROM StcAPreList parent WHERE parent.pre_group_id = d1.pre_group_id ;
  END LOOP ;
END ;
/
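The de-duplication rule above can be sketched outside the database too: build each parent's denormalised child signature, then keep only the lowest parent id per (aos_code, signature) pair. The data and ids below are illustrative, not from the real tables:

```python
# pre_group_id -> aos_code (parents), pre_group_id -> child values (children)
parents = {101: "HIST", 102: "HIST", 103: "MATH"}
children = {101: ["H1P", "H2F"], 102: ["H2F", "H1P"], 103: ["M1P"]}

# Denormalised signature per parent, like the LISTAGG column: sort the child
# values so that child row order does not matter, then concatenate them.
signatures = {
    pid: (aos, "".join(sorted(children[pid])))
    for pid, aos in parents.items()
}

# Keep the first (lowest) pre_group_id seen for each signature.
keep = {}
for pid, sig in sorted(signatures.items()):
    keep.setdefault(sig, pid)

# Everything else with the same signature is a duplicate to delete.
duplicates = [pid for pid, sig in signatures.items() if keep[sig] != pid]
```

Here parent 102 has the same aos_code and the same set of children as parent 101, so `duplicates` is `[102]`, matching the `pre_group_id > MIN(...)` condition in the SQL.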
The code below generates XML-format output; just plug in any SQL and it will work without further development.
This is based on three articles by other authors:
- Tim Hall – but Tim’s code has a bug that stops it outputting after the first 32k of data.
- Tom Kyte – the fix for that bug.
- Steve Karam – code to nest XML data.
Performance is kept optimal by using set-based SQL statements without cursor loops, and by using utl_file.put to write in 32k chunks rather than writing line by line with dbms_output.put_line or utl_file.put_line.
Before generating XML, ask if this is the best approach for transferring the data. Database to database copying is better done across database link, if possible.
Maintaining large numbers of client tnsnames.ora files is a pain. Oracle Internet Directory is the recommended solution to this, but it is not an easy exercise to set up or migrate to.
An easier alternative is to set TNS_ADMIN on all client PCs to point at a single common tnsnames.ora file on a network share. All the client PCs need read access to that file, but they don't need write access.
One drawback of doing that is that if the network share is unavailable for any reason then all client connections will fail, so a highly resilient network share would be needed. A safer alternative is to set TNS_ADMIN to the same local disk folder on each PC and rely on automated PC management tools like Microsoft System Center to deploy the master tnsnames.ora file out to each client PC at regular intervals.
The precedence of TNS_ADMIN in Windows is supposedly (I’ve not yet tested this):
- The current working directory
- TNS_ADMIN defined by the session (set tns_admin= in batch/command prompt)
- TNS_ADMIN defined as a global environment variable (control panel > system > advanced system settings > advanced > environment variables)
- TNS_ADMIN defined in registry (run “regedit” and look under HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\…)
- Default location (network\admin directory for that particular Oracle Home)
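That first-match search order can be sketched as a simple fall-through. In this Python sketch each argument is a candidate directory, or None if that source is unset; the function name and shape are illustrative, not an Oracle API:

```python
import os

def resolve_tns_admin(cwd, session_env, global_env, registry_value, oracle_home):
    """Return the first tnsnames.ora location per the supposed Windows precedence."""
    candidates = [
        cwd,             # 1. tnsnames.ora in the current working directory
        session_env,     # 2. TNS_ADMIN set in the session (set tns_admin=...)
        global_env,      # 3. TNS_ADMIN global environment variable
        registry_value,  # 4. TNS_ADMIN in the registry
        os.path.join(oracle_home, "network", "admin"),  # 5. Oracle Home default
    ]
    return next(c for c in candidates if c)
```

With only the global environment variable set, that source wins; with nothing set, the Oracle Home default is used.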
The best utilities to use when copying/mirroring/backing up files are:
- robocopy – Windows
- rsync – Linux
- lftp – Linux, when password-less ssh has not been set up
Scripts can supply a password to sftp using lftp like this (the directory names are placeholders):
lftp -u user:password sftp://server.domain.com << END_LFTP
mirror --reverse /local/dir /remote/dir
bye
END_LFTP
Alternatively, you can set up password-less ssh instead.
SQL Server transaction logs need to be backed up regularly, or they will grow and grow forever, eventually filling up all available disk space. That is best done in a maintenance plan. Unfortunately you have to remember to set that up post-install; it doesn't happen out of the box by default (why not? Surely it would be easy for Microsoft to create a standard maintenance plan at install time?).
Oracle doesn't have this problem; its redo logs are a fixed size and are automatically archived off by default.
If you have a SQL Server that has been left unattended like this for a long time, you may want to shrink down the over-large transaction log files in addition to setting up a maintenance plan – although that can cause fragmentation so you might also choose to leave it be.
To shrink, use commands like these (the database and log file names here are placeholders; back the log up first):
USE mydatabase ;
BACKUP LOG mydatabase TO DISK = 'D:\backup\mydatabase_log.trn' ;
DBCC SHRINKFILE ( mydatabase_log , 1024 ) ;
Remote Desktop Connection Manager, a free download from http://www.microsoft.com/en-gb/download/details.aspx?id=21101, lets you switch quickly between Windows servers; it saves the servers and your login details in a list.
If data has been removed due to user error, it can easily and quickly be brought back without the need for restores, using flashback query:
CREATE TABLE temp_rescue AS SELECT * FROM owner.table AS OF TIMESTAMP TO_TIMESTAMP ( 'Thu 14-Nov-2013 13:00:00' , 'Dy DD-Mon-YYYY HH24:MI:SS' ) ;
Note that this is for flashback query, which is (confusingly) unrelated to flashback database, flashback area, or db_flashback_retention_target. Instead flashback query is related to undo_retention. Flashback query is always on and available, but the unrelated flashback database is switched off by default (see with 'select flashback_on from v$database').
By default undo_retention is set to 900s = 15 minutes: not very long, but that is a target rather than a guaranteed value, and in practice you can often flashback query much further back than that. If you do want longer flashback query times, you can set up a flashback data archive, but that is not in place by default and has to be configured by a DBA beforehand.
You can use flashback query with data dictionary objects like all_source, useful if pl/sql code has been changed by mistake.