Dynamics AX no longer supports the Oracle database, so pre-AX 2012 applications still running on Oracle need to be migrated to SQL Server before they can be upgraded. Here’s how to overcome the risk and fear associated with a Dynamics AX data migration. Continue reading “Dynamics AX 2009 Oracle to SQL Server Migration”
The “IO Error: Got minus one from a read call” error is caused by the sqlnet.ora file on the database server being configured to allow connections only from certain computers on the network (as identified by their IP addresses). Continue reading “SQL Developer – IO Error: Got minus one from a read call”
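As an illustration, the restriction usually comes from Oracle’s valid node checking parameters in sqlnet.ora on the database server; the IP addresses below are placeholders for your own network:

```
# sqlnet.ora on the database server (illustrative values)
TCP.VALIDNODE_CHECKING = YES

# Only clients at these addresses may connect; a client missing
# from the list typically sees a connection error like the above.
TCP.INVITED_NODES = (192.168.1.10, 192.168.1.11)
```

Note that valid node checking is enforced by the listener, so the listener needs a restart (or reload) before a change to the list takes effect.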
Oracle very kindly offered to run through a Proof of Concept installation of their GoldenGate replication product with us, and seeing as how we’re not prone to looking gift horses in their mouths, we’ve accepted!
Quiet Fridays are a nice treat but something too often squandered. This past Friday I found myself left to my own devices as the rest of the DBA team were working from another location. This left me free to get a few things about running our environment straight in my own mind, the most notable being how to use CommVault for database backup and recovery beyond simply copying RMAN output to tape.
I have access to a test server for playing about with such things, so I created a test database with the sample schemas to use for my backup and recovery testing. This also presented an ideal opportunity to look at the capabilities of OEM in and around monitoring databases while something bad happens to them (ahead of my recovery tests), so I first needed to install the OEM agent and add the relevant targets (host, databases, listener, ASM). The OEM agent went in without too much fuss (just a few things needed editing regarding sudo and the DBSNMP database user account) and OEM got down to its monitoring duties quickly. This was a little unexpected, as yesterday I had an issue with OEM getting stuck in “status pending” for one of our database instances (in that case the emctl clearstate command fixed it), so I was half expecting to hit a problem today. Our OEM system is configured with groups like “Database Instance” set up to send monitoring emails to the DBA team, so the addition of my test system into the appropriate groups did trigger email notifications as I brought the database up and down as part of my CommVault tests.
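For reference, that “status pending” fix amounts to a few emctl calls on the agent host; the $AGENT_HOME path is a placeholder for wherever your agent is installed:

```
# Run as the agent owner on the monitored host ($AGENT_HOME is illustrative)
$AGENT_HOME/bin/emctl status agent       # check the agent's own view of things first
$AGENT_HOME/bin/emctl clearstate agent   # clear the stuck state
$AGENT_HOME/bin/emctl upload agent       # push fresh metric data to the OMS
```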
On the CommVault front, the agent also went onto the server with surprising ease once you know how to properly configure access permissions for the agent on Linux systems (I’ve seen this trip up administrators in the past, so I knew that there were additional steps to be performed on the host beyond just deploying the agent). Once the database had the correct media policy applied it was a simple matter to run a backup. The hard part about CommVault is remembering that, with regard to Oracle, it is essentially running an RMAN backup, so database issues like dropped objects and schemas (which is what I was testing) need to be dealt with via a point-in-time recovery in order to be successful. Also, whenever CommVault throws an error, it’s best to analyse the associated RMAN log for the cause, as the CommVault GUI (version 9 anyway) doesn’t reveal much in the way of detail.
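As a sketch of what CommVault is doing underneath, a point-in-time recovery in plain RMAN has roughly this shape; the timestamp is a placeholder for a moment just before the object or schema was dropped, and the database must be mounted (not open) when it runs:

```
# Illustrative RMAN point-in-time recovery (run against a mounted database)
RUN {
  SET UNTIL TIME "TO_DATE('19-07-2013 09:00:00','DD-MM-YYYY HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
# Opening with RESETLOGS starts a new database incarnation
ALTER DATABASE OPEN RESETLOGS;
```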
Last Friday thankfully wasn’t squandered and my initial foray into deploying OEM and CommVault now opens the door to some further adventures; I am particularly interested in deploying both of those systems to SQL Server (especially OEM!).
Data Guard // Database Incarnations // Standby logs applying in the alert log but not in V$ARCHIVED_LOG
Over the past week I’ve been getting a great introduction to the practical workings of Data Guard. In the past I’ve worked a lot with disaster recovery systems built using Oracle Standard Edition, which is not licensed to use Data Guard; in those circumstances a poor man’s version of the system was put in place by using rsync to synchronise archive logs between a production database and a DR database. With the logs in place, a script run on a schedule would recover the data from the logs. It turns out that the concept of Data Guard is pretty similar, in that it’s basically about getting archive logs to the right place and setting the destination database into a mode where it can read those logs and be ready for when disaster strikes.
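The poor man’s version described above boils down to two scheduled steps, something like the following; the hostnames and paths are illustrative, and a real script would also need to handle the recovery prompt (for example by cancelling once no more logs are available):

```
# Step 1 (on the DR host): ship new archive logs across from production
rsync -av prod-db:/u01/app/oracle/arch/ /u01/app/oracle/arch/

# Step 2 (on the DR host): apply whatever has arrived
sqlplus -s / as sysdba <<EOF
RECOVER AUTOMATIC STANDBY DATABASE;
EOF
```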
Using Data Guard in a production setting makes for interesting times as you get into the hows and whys of setting up the Data Guard source and destinations and then on into considerations like physical vs. logical DR databases, as well as the debate about the correct terms to use (e.g. is the DR database an “active standby” or “active data guard” system?).
Over the past couple of days I’ve been testing a procedure for switching between the production system and the DR system. The environment is made up of two geographic locations (actually situated in two different cities, which is just like something from the official documentation). On both sites there are two RAC nodes, but only on the production site are both nodes up and running – the standby is open (in read only) as a single instance, or more accurately, one node of a two-node RAC.
The procedure for the switch over is straightforward enough, but during testing I did encounter something unusual with one of the scripts used to ensure archive logs are being applied to the standby. Here’s the query that caused the problem:
SELECT SEQUENCE#,
       TO_CHAR(FIRST_TIME, 'DD-MON-YY HH24:MI:SS') FIRST_TIME,
       TO_CHAR(NEXT_TIME, 'DD-MON-YY HH24:MI:SS') NEXT_TIME,
       APPLIED
FROM   V$ARCHIVED_LOG
WHERE  THREAD# = 1
ORDER BY SEQUENCE#;
This query returns details of log application coming from thread #1, i.e. the first node of the source RAC – changing the thread number to 2 gets details of the logs from the other node. The end of the output of that query is:
 SEQUENCE# FIRST_TIME         NEXT_TIME          APPLIED
---------- ------------------ ------------------ ---------
     48192 08-JUL-13 22:33:16 09-JUL-13 00:30:17 YES
     48193 09-JUL-13 00:30:17 09-JUL-13 00:56:27 NO
     48193 09-JUL-13 00:30:17 09-JUL-13 00:56:27 NO
     48193 09-JUL-13 00:30:17 09-JUL-13 00:56:27 NO
     48193 09-JUL-13 00:30:17 09-JUL-13 00:56:27 YES
Never mind the Yes’s and No’s for now and instead focus your attention on the NEXT_TIME date, the 9th of July 2013… Today is the 24th of July! Where are today’s logs? Strangely, the alert log for the standby database shows that everything is OK and that recovery of logs is proceeding as expected. It turns out that the problem is down to the incarnation of the database and a misleading ORDER BY clause.
For more on database incarnations you really should check out the official documentation, particularly the neat diagram that explains what happens when you issue a RESETLOGS command; you can find all that here: http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmrvcon.htm#BRADV1171
In my case, the incarnation had moved on (as a result of something we were trying out last week) and so log sequence numbers are now being reused. The query above orders its results by SEQUENCE#, so the highest sequence numbers display last, but those high sequence numbers were generated by a previous incarnation of the database and so are not current. A simple change to the query reveals the truth of the situation:
SELECT SEQUENCE#,
       TO_CHAR(FIRST_TIME, 'DD-MON-YY HH24:MI:SS') FIRST_TIME,
       TO_CHAR(NEXT_TIME, 'DD-MON-YY HH24:MI:SS') NEXT_TIME,
       APPLIED
FROM   V$ARCHIVED_LOG
WHERE  THREAD# = 1;
With the ORDER BY removed, the results of the query look a lot more reassuring:
 SEQUENCE# FIRST_TIME         NEXT_TIME          APPLIED
---------- ------------------ ------------------ ---------
      3208 24-JUL-13 14:11:52 24-JUL-13 14:11:54 YES
      3209 24-JUL-13 14:11:54 24-JUL-13 14:11:59 YES
      3210 24-JUL-13 14:11:59 24-JUL-13 14:12:02 YES
      3211 24-JUL-13 14:12:02 24-JUL-13 14:13:57 YES
      3212 24-JUL-13 14:13:57 24-JUL-13 14:14:50 YES
Note the correct date and the low, low log sequence numbers!
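Rather than dropping the ORDER BY altogether, another option is to keep it and filter to the current incarnation explicitly; something like this should work (assuming 10g or later, where both V$ARCHIVED_LOG and V$DATABASE_INCARNATION expose RESETLOGS_ID):

```
SELECT SEQUENCE#,
       TO_CHAR(FIRST_TIME, 'DD-MON-YY HH24:MI:SS') FIRST_TIME,
       TO_CHAR(NEXT_TIME, 'DD-MON-YY HH24:MI:SS') NEXT_TIME,
       APPLIED
FROM   V$ARCHIVED_LOG
WHERE  THREAD# = 1
-- keep only logs belonging to the current incarnation
AND    RESETLOGS_ID = (SELECT RESETLOGS_ID
                       FROM   V$DATABASE_INCARNATION
                       WHERE  STATUS = 'CURRENT')
ORDER BY SEQUENCE#;
```

This way reused sequence numbers from an older incarnation never appear in the output in the first place.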
Keeping track of the time, so to speak, when the database incarnation moves on is something that can catch out any DBA, but it is one of those fun little advanced topics, along with Data Guard, that adds a little spice to the day.
The programmatic nature of Oracle’s Recovery Manager tool provides great flexibility but also creates a common problem of insecure scripts being left running on production servers allowing anyone to read critical password details with ease. Continue reading “Securing RMAN Scripts”
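As a minimal sketch of one mitigation (assuming a Linux host and an illustrative path under /tmp), the script below relies on OS authentication so that no password is ever written to disk, and locks the file down so only its owner can read it:

```shell
#!/bin/sh
# Illustrative location only -- a real script would live somewhere
# more sensible than /tmp.
SCRIPT=/tmp/demo_backup.rman

# "connect target /" uses OS authentication (membership of the dba
# OS group), so no password appears anywhere in the script.
cat > "$SCRIPT" <<'EOF'
connect target /
backup database plus archivelog;
EOF

# Owner-only read/write stops other OS users reading the script.
chmod 600 "$SCRIPT"
ls -l "$SCRIPT"
```

The Oracle software owner can then run it with `rman cmdfile=/tmp/demo_backup.rman`; the key point is that neither the file contents nor the process list ever exposes a password.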
Shared storage systems are very common in enterprise computing, but it’s not cheap or easy to get access to the technology for education or testing purposes. In this posting I investigate whether a system like Openfiler can work in those circumstances, and if it’s worth the effort. Continue reading “SAN Simulation with Openfiler”
If you ever thought that keeping up with the Joneses was tough, you should try keeping your skills up to date in Information Technology!
When not everything in the data centre is your responsibility it can be easy to lose track of what’s actually in there. If that happens, what can you do to get back control? Continue reading “Admins Don’t Know Their Own Systems? Really?”
My diary today is blank. Of course, the fact that I’ve nothing scheduled doesn’t mean that I’ve nothing to do. In the life of a consultant an empty day in the diary can be a godsend, allowing the all-important admin to be done: expense claims submitted, outstanding paperwork filed, laptops given some much-needed systems administration attention, personal projects followed up on (like the in-house test servers I manage), and clients who haven’t been in touch for a while given a courtesy call to ensure that everything is OK with them. These days are much needed and welcome. But not too often.
It can be easy for a consultant, who lives and dies by the contents of the diary, to look at a full calendar for the next few weeks and despair at how busy they’re going to be, overwhelmed by the volume of work coming their way and longing for empty days like today. I like to take a different, more entrepreneurial view of the diary and try to think of it not merely as a calendar of upcoming activities but as my order book, a promise of future work, and that puts the whole thing into a more favourable light.
Regardless of how you look at your busy schedule, the fact is that managing the diaries of multiple people can be a complex task and should be handled with some care as mismanaging it can lead to disaster. With this in mind, here are some of my considerations for managing the diaries and therefore the time of consultants. Continue reading “5 Considerations for Brilliant Diary Management”