A Quick Raspberry Pi Project

Since about 2015, it seems like there are always Raspberry Pis lying around, at work and at home, just waiting for some project to come along and put them to use.

One of the Pis looking for a job in my home lab has found itself retasked as a Plex Media Server, following a very simple process.

  • A new operating system was written onto the SD card; this time out, the OS chosen was the Raspberry Pi variant of Ubuntu Server, in order to have a more consistent experience across the systems being used lately.
  • Once up and running and fully patched, the Plex repo was added and the Media Server installed via apt (see the sketch after this list).
  • Directories were added to host the media library and the Plex configuration run. Once complete, it was a simple matter of connecting the TV to the new media server via the Plex app and enjoying the content that was driving this little endeavour in the first place.
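
For anyone following along, the repo setup and install boil down to a handful of commands. This is a minimal sketch based on Plex’s published instructions for Ubuntu/Debian systems, so check their site for the current key and repo details before running it:

$ curl https://downloads.plex.tv/plex-keys/PlexSign.key | sudo apt-key add -
$ echo deb https://downloads.plex.tv/repo/deb public main | sudo tee /etc/apt/sources.list.d/plexmediaserver.list
$ sudo apt update
$ sudo apt install plexmediaserver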

I heard it said recently that a Raspberry Pi with decent enough storage can be purchased for less than the price of running an EC2 instance for a year, so when it fits the use case a Pi can be a great solution. Even in the age of cloud there are still times when it’s better to run locally (streaming dodgy media being a classic).



Old Books, Russian Hip-Hop, and Database Imports

The value of learning about lots of different topics really comes to the fore when you spot a connection, however tenuous, between things that on the surface have no bearing on each other at all.

I recently picked up a copy of the first Joel on Software book. Having read the second book in the series, and having once been a fan of the blog, it seemed only right to revisit the insights into software development that Joel Spolsky shared with the world in the early 2000s. In the fast-paced world of technology, the interest in this material, dating as it does from a pre-Agile and pre-Cloud age, lies in considering which foundational and unchanging lessons are worth a refresher, as well as in seeing how things have changed since that time. The biggest surprise so far has been somewhat foundational: the chapter on character sets (The Absolute Minimum Every Software Developer Absolutely, Positively, Must Know About Unicode and Character Sets (No Excuses!)) has really made me reconsider a topic I felt I was pretty familiar with already.

So, this is the “old” book, but what’s that got to do with Russian hip-hop?

I’ve discussed my love of German industrial metal in the past (in this post), safe in the knowledge that, like many fans of the genre around the world, my inability to speak German has little bearing on my enjoyment of the music and can in many ways heighten it through the air of mystery that surrounds the unknown. This mindset has proven useful in building an appreciation for other international music (Swedish melodic death metal, obviously, but also French rock/metal, though that’s another story), and into this open mind dropped some Russian hip-hop.

While watching a very dodgy Russian action movie late one night, I was rather taken with the track that played over the end credits, in that “I don’t understand the lyrics, but I love the tune” way familiar to Rammstein fans. However, not wanting to get caught out humming along to something grossly inappropriate, I decided to check the lyrics on iTunes and was met with a most wonderful surprise: they were presented in Cyrillic. I thought this particularly cool, as I still had thoughts of character sets dancing through my head, along with a newfound appreciation of how hard this type of thing can be to deliver. (I don’t know if there’s an official video anywhere, but check out this link to the tune that caught my attention: https://youtu.be/7yW0T04DCQ0)

So, here’s the Russian hip-hop, but where’s the (inevitable) database connection?

My recent interest in building an Oracle environment is related to restoring a dump file of a very old database of mine for educational purposes. A very, very old database that was originally hosted on a Windows server many, many years ago. The dump file is intact, however, and the import was going fine right up until it wasn’t, at which point a review of the log file was needed. In there lurked this error (more than once):

ORA-12899: value too large for column

This odd contradiction (how could a column be created too small, when the import creates the column as well as loading the data into it, and so is fully “aware” of every aspect?) is one of those misleading errors. The problem is character set related, caused by a mismatch between the source database and the target that is too great for the import process to resolve: typically the old database used a single-byte character set while the new one uses the multi-byte AL32UTF8, so the same string can need more bytes than the original column definition allows.
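
A quick way to see the effect is to compare character and byte lengths for a string containing an accented character. This is just an illustrative sketch, assuming an AL32UTF8 target database (the common default these days):

$ sqlplus -s / as sysdba <<'SQL'
-- 'café' is 4 characters but 5 bytes in AL32UTF8, because the accented
-- character takes two bytes; a VARCHAR2(4 BYTE) column imported from a
-- single-byte database would overflow it with ORA-12899.
SELECT LENGTH('café') AS chars, LENGTHB('café') AS bytes FROM dual;
SQL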

The solution to the import problem was to create another throw-away database configured with the character set of the dump file, and from there either clone the PDB or, more interestingly, take a look at the Database Migration Assistant for Unicode (DMU) as a more permanent fix. For me, though, the funny thing is how the dots can be joined between radically disconnected things when you know, or are at least aware of, what to look for.

Why Install Oracle 21c on OL8 via RPM?

Far too often in home lab situations, where there’s some interesting idea to be explored, efforts get derailed by sinking too much time into environment setup, particularly when something like Oracle Database is involved. Maybe the initial idea is to play with data migration, or to investigate some new feature, or just to brush up on skills, but whatever the scenario there’s a need for an environment. Then, just like that, a week has passed and you’re just about ready, having hunted down all the obscure errors and issues thrown up as the setup progressed, when you get distracted by something else because the idea you wanted to work on has been crushed by the weight of getting things ready.

For reasons that I suspect are more related to Cloud automation than home labs, Oracle has managed to package up the install of the database on Linux (up to Enterprise Edition!) into an RPM that actually works. They’ve been trying to do this for a while, and previous efforts were a mixed bag, better regarded as interesting curiosities than as real and useful things, but the 21c release RPM seems to be both very real and actually useful, in certain circumstances anyway.

Oracle has used an RPM to prepare hosts for database installation for some time now, but the main install via RPM was just not right, or simply didn’t work first time, and as you’d never want to install this way for real it wasn’t something worth putting time into. Times have changed, though, and whatever the reason behind the development, the RPM database install now works to the point where it’s useful for home labs, demos, and other throwaway scenarios. It makes setting up an environment possible with a handful of commands: one for the pre-install RPM, one for the database install RPM, and one for a basic configuration that sets up a container database and a pluggable database.
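
On OL8 that looks something like the following. Treat this as a sketch rather than gospel, as the exact RPM file name depends on what you download from Oracle:

$ sudo dnf install -y oracle-database-preinstall-21c
$ sudo dnf install -y ./oracle-database-ee-21c-1.0-1.ol8.x86_64.rpm
$ sudo /etc/init.d/oracledb_ORCLCDB-21c configure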

The RPM-based install is incredibly handy for setting up non-production, “throw away to the point of being ephemeral” style environments, or for testing out automation. However, the lack of control over the installation rules this approach out for production or other long-lived systems, where you’d need to know what was going on under the hood or might want to change configuration elements at install time. In many ways the RPM install is a lot like using a managed database from a cloud provider: the internal workings have been abstracted away and some functionality lost or restricted, which can be ideal depending on the use case.

From the perspective of the home lab, the RPM-based install is perfect for rapidly setting up a system for testing and learning, as long as the topic under consideration isn’t “How to set up Oracle in Production”.

The Oracle 21c RPM installer can be found here: https://www.oracle.com/database/technologies/oracle21c-linux-downloads.html

How to Deploy Hubot

Finding a good excuse to learn a new technology is something I’ve been interested in for a while, particularly as the pace of change and technology adoption only ever seems to increase. Knowing that the IT world is in an ongoing state of flux, finding an engaging way to learn about the tech that technologists are expected to know is more than idle curiosity; it’s essential for career success.

Revisiting chatbots, a topic I first encountered through a Wired article from summer 2015, I discovered that getting a chatbot up and running, and more importantly useful and usable, involved a wide variety of different technologies. As the article heaped so much praise on GitHub’s bot, I decided to try it out, with a view to having it do some useful tasks around AWS administration, one of the two challenges that had been put to our 2016 interns.

This led me to having to work with a number of interesting and topical technologies, including:

  1. Hubot – the GitHub chatbot
  2. Node.js and npm
  3. Git
  4. Redis
  5. AWS CLI
  6. systemd
  7. Slack
  8. CoffeeScript

To get started, as I wanted my bot to work with AWS, it was necessary to prepare a couple of things on that side.

I created an AWS IAM account for my bot to use. I didn’t assign a password to the account, as I don’t intend for it to be used interactively (i.e. for logging onto the console); what I was after was an access key and secret key that can be used with the AWS Command Line and therefore by the bot.
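
From an existing administrative session, that amounts to something like the following sketch (the user name here is just a placeholder; you’d also attach whatever policy suits the tasks the bot will perform):

$ aws iam create-user --user-name io-bot
$ aws iam create-access-key --user-name io-bot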

With the AWS pieces in place, I moved on to setting up the host in AWS itself, and selected a Red Hat Enterprise Linux 7.2 machine to act as my Hubot host, as this would approximate a production deployment.

Onto my new RHEL server, I installed and configured the AWS CLI.

$ curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 6984k  100 6984k    0     0  4106k      0  0:00:01  0:00:01 --:--:-- 4105k

$ unzip awscli-bundle.zip
Archive: awscli-bundle.zip
inflating: awscli-bundle/install
inflating: awscli-bundle/packages/jmespath-0.9.0.tar.gz
inflating: awscli-bundle/packages/simplejson-3.3.0.tar.gz
inflating: awscli-bundle/packages/botocore-1.4.43.tar.gz
inflating: awscli-bundle/packages/ordereddict-1.1.tar.gz
inflating: awscli-bundle/packages/awscli-1.10.53.tar.gz
inflating: awscli-bundle/packages/rsa-3.4.2.tar.gz
inflating: awscli-bundle/packages/futures-3.0.5.tar.gz
inflating: awscli-bundle/packages/docutils-0.12.tar.gz
inflating: awscli-bundle/packages/s3transfer-0.1.1.tar.gz
inflating: awscli-bundle/packages/colorama-0.3.7.zip
inflating: awscli-bundle/packages/argparse-1.2.1.tar.gz
inflating: awscli-bundle/packages/pyasn1-0.1.9.tar.gz
inflating: awscli-bundle/packages/virtualenv-13.0.3.tar.gz
inflating: awscli-bundle/packages/python-dateutil-2.5.3.tar.gz
inflating: awscli-bundle/packages/six-1.10.0.tar.gz

$ sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
Running cmd: /bin/python virtualenv.py --python /bin/python /usr/local/aws
Running cmd: /usr/local/aws/bin/pip install --no-index --find-links file:///home/ec2-user/Downloads/awscli-bundle/packages awscli-1.10.53.tar.gz
You can now run: /usr/local/bin/aws --version

$ aws configure

Up next was configuring the EPEL yum repository, so that I could use yum to install things like Redis.

$ wget -r --no-parent -A 'epel-release-*.rpm' http://dl.fedoraproject.org/pub/epel/7/x86_64/e/
$ sudo rpm -Uvh dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-*.rpm
warning: dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Preparing...                ################################# [100%]
Updating / installing...
1:epel-release-7-8          ################################# [100%]
$ ll /etc/yum.repos.d
total 32
-rw-r--r--. 1 root root  957 Jul 23 17:37 epel.repo
-rw-r--r--. 1 root root 1056 Jul 23 17:37 epel-testing.repo
-rw-r--r--. 1 root root  358 Nov  9  2015 redhat.repo
-rw-r--r--. 1 root root  607 Aug  9 04:08 redhat-rhui-client-config.repo
-rw-r--r--. 1 root root 8679 Aug  9 04:08 redhat-rhui.repo
-rw-r--r--. 1 root root   80 Aug  9 04:08 rhui-load-balancers.conf

With the appropriate repos in place, I was able to install Redis:

$ sudo yum install redis -y

$ sudo systemctl start redis.service

$ sudo systemctl status redis.service

redis.service - Redis persistent key-value database
   Loaded: loaded (/usr/lib/systemd/system/redis.service; disabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/redis.service.d
   Active: active (running) since Tue 2016-08-09 04:37:35 EDT; 5s ago
 Main PID: 4971 (redis-server)
   CGroup: /system.slice/redis.service
           └─4971 /usr/bin/redis-server

Aug 09 04:37:35 ip-192-168-20-246.eu-west-1.compute.internal systemd[1]: Starting Redis persistent key-value database...
Aug 09 04:37:35 ip-192-168-20-246.eu-west-1.compute.internal systemd[1]: Started Redis persistent key-value database.

Hubot is basically a bunch of Node.js, so it was necessary to get it and the Node Package Manager (npm) installed too.

$ wget https://nodejs.org/dist/v6.3.1/node-v6.3.1-linux-x64.tar.xz

$ sudo tar --strip-components 1 -xf /home/ec2-user/Downloads/node-v6.3.1-linux-x64.tar.xz

I called my Hubot “Io” (as in I/O), and created a directory for Io to live in. Once installed, Hubot can be started in a basic “shell” mode that enables testing and local interactions with the bot on the host server.

$ sudo npm install -g yo generator-hubot

$ cd /io

$ yo hubot

$ bin/hubot

At this point, a working Hubot is in place, but it isn’t very useful. In order to get more out of him, Hubot can be extended by adding scripts. In the fine tradition of open source, many scripts that others have developed are available to download and install via npm, and of course you can create your own.
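
As a taste of what that looks like, pulling in a community script is just an npm install plus an entry in the bot’s external-scripts.json file. A quick sketch, using the hubot-help package as one example from the community catalogue:

$ cd /io
$ npm install hubot-help --save
$ # then add "hubot-help" to the array in external-scripts.json and restart the bot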

Up next, extending Hubot through scripts.

Solving Problems with Chatbots

Employee engagement is a big issue for any company; put simply, disengaged employees lead to departing talent. So it’s vital to constantly be looking for ways to develop teams, especially those working with technology, in ways that result in high levels of ongoing engagement.

During a recent conversation about engagement and issues around internal communications, a colleague of mine suggested making better use of the screens we have deployed around the office to help keep everyone in the loop and up to date. This is an admirable idea, but like all comms plans it depends on the content more than on the delivery mechanism; the medium may be the message, but it’s important to have a message in the first place!

The conversation about content reminded me of an article in Wired from about a year ago about GitHub and how they had deployed a chatbot. What was interesting for me in that article was the story of how GitHub employees extend the chatbot’s capabilities by scripting new features. What makes this doubly interesting is that it’s not only the engineers at GitHub doing this, but non-technical people too. One example in the piece was of someone from the marketing department creating a script that their chatbot uses to check on the status of local street vendors.

It occurred to me that a chatbot system like this could help with two focus areas for developing team engagement. Firstly, it could provide the content that can be so hard to source: by putting a conversational interface on a content management and deployment system, the process of gathering and displaying interesting material could be dramatically improved, as it would give everyone a reason to interact with the chatbot and so generate content. Secondly, having a chatbot system in place could provide an outlet for non-developers who want to spend some time learning development, giving them a purpose beyond the standard issue “Hello World” script, without which too many people abandon their learning efforts.

Content generation would be part of the Workflow use case for a bot

Every summer we get a bunch of interns, and in addition to their regular assigned work they are tasked with completing a technical challenge. The challenge is meant to be, well, challenging, and this year we settled on chatbots (I suspect this was entirely my fault and an object lesson in reaping what you sow)! One team of interns was tasked with setting up a bot that can field queries about one part of the business, while the other (my team) looked at connecting a bot to AWS in order to complete Cloud tasks through a chat interface.

It is something of a tradition that, while the interns get on with developing a solution in their own way, I go ahead and do the challenge myself in my own way, to determine if it can be done and how differently I’d tackle the solution compared to how the interns go about it.

As the interns had gone down the AIML route, I decided to deploy Hubot, GitHub’s own chatbot.


Up next: Getting started with Hubot

Playing with Oracle Enterprise Manager

Quiet Fridays are a nice treat, but one too often squandered. This past Friday I found myself left to my own devices, as the rest of the DBA team were working from another location. This left me free to get a few things about running our environment straight in my own mind, the most notable being how to use CommVault for database backup and recovery beyond simply copying RMAN output to tape.

I have access to a test server for playing about with such things, so I created a test database with the sample schemas to use for my backup and recovery testing. This also presented an ideal opportunity to look at the capabilities of OEM in and around monitoring databases while something bad happens to them (ahead of my recovery tests), so I first needed to install the OEM agent and the relevant options (host, databases, listener, ASM). The OEM agent went in without too much fuss (just a few things needed editing regarding sudo and the DBSNMP database user account) and OEM got down to its monitoring duties quickly. This was a little unexpected, as yesterday I had an issue with OEM getting stuck in “status pending” for one of our database instances (in that case the emctl CLEARSTATE command fixed it), so I was half expecting to hit a problem today. Our OEM system is configured with groups like “Database Instance” set up to send monitoring emails to the DBA team, so adding my test system into the appropriate groups did trigger email notifications as I brought the database up and down during my CommVault tests.
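
For reference, the “status pending” fix was run on the agent side; a minimal sketch, assuming the agent’s bin directory is on the path:

$ emctl clearstate agent
$ emctl upload agent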

On the CommVault front, the agent also went onto the server with surprising ease, once you know how to properly configure access permissions for the agent on Linux systems (I’ve seen this trip up administrators in the past, so I knew there were additional steps to be performed on the host beyond just deploying the agent). Once the database had the correct media policy applied, it was a simple matter to run a backup. The hard part about CommVault is remembering that, with regard to Oracle, it is essentially running an RMAN backup, so database issues like dropped objects and schemas (which is what I was testing) need to be dealt with via a point-in-time recovery in order to be successful. Also, whenever CommVault throws an error, it’s best to analyse the associated RMAN log for the cause, as the CommVault GUI (version 9, anyway) doesn’t reveal much in the way of detail.
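
In plain RMAN terms, recovering from a dropped schema means winding the whole database back to just before the drop. The sketch below shows the general shape of that, with a made-up timestamp; CommVault drives the equivalent through its own interface:

$ rman target / <<'RMAN'
run {
  # recover to just before the schema was dropped (hypothetical time)
  set until time "TO_DATE('2016-08-12 09:00:00','YYYY-MM-DD HH24:MI:SS')";
  restore database;
  recover database;
}
# opening with RESETLOGS starts a new incarnation of the database
alter database open resetlogs;
RMAN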

Last Friday thankfully wasn’t squandered, and my initial foray into deploying OEM and CommVault now opens the door to some further adventures; I am particularly interested in pointing both of those systems at SQL Server (especially OEM!).

Oracle Data Guard and Database Incarnations

// Data Guard // Database Incarnations // Standby Logs applying in Alert Log but not in V$ARCHIVED_LOG

Over the past week I’ve been getting a great introduction to the practical workings of Data Guard. In the past I’ve worked a lot with disaster recovery systems built using Oracle Standard Edition, which isn’t licensed to use Data Guard; in those circumstances a poor man’s version of the system was put in place by using rsync to synchronise archive logs between a production database and a DR database. With the logs in place, a scheduled script would recover the data from them. It turns out that the concept of Data Guard is pretty similar, in that it’s basically about getting archive logs to the right place and setting the destination database into a mode where it can read those logs and be ready for when disaster strikes.

Using Data Guard in a production setting makes for interesting times as you get into the hows and whys of setting up the Data Guard source and destinations and then on into considerations like physical vs. logical DR databases, as well as the debate about the correct terms to use (e.g. is the DR database an “active standby” or “active data guard” system?).

Over the past couple of days I’ve been testing a procedure for switching between the production system and the DR system. The environment is made up of two geographic locations (actually situated in two different cities, which is just like something from the official documentation). On both sites there are two RAC nodes, but only on the production site are both nodes up and running – the standby is open (in read-only mode) as a single instance or, more accurately, as one node of a two-node RAC.

The procedure for the switchover is straightforward enough, but during testing I did encounter something unusual with one of the scripts used to ensure archive logs are being applied to the standby. Here’s the query that caused the problem:

SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, APPLIED
FROM V$ARCHIVED_LOG
WHERE THREAD# = 1
ORDER BY SEQUENCE#;

This query returns details of log application coming from thread #1, i.e. the first node of the source RAC – changing the thread number to 2 gets details of the logs from the other node. The end of the output of that query is:

 SEQUENCE# FIRST_TIME         NEXT_TIME          APPLIED
---------- ------------------ ------------------ ---------
     48192 08-JUL-13 22:33:16 09-JUL-13 00:30:17 YES
     48193 09-JUL-13 00:30:17 09-JUL-13 00:56:27 NO
     48193 09-JUL-13 00:30:17 09-JUL-13 00:56:27 NO
     48193 09-JUL-13 00:30:17 09-JUL-13 00:56:27 NO
     48193 09-JUL-13 00:30:17 09-JUL-13 00:56:27 YES

Never mind the Yes’s and No’s for now; instead, focus your attention on the NEXT_TIME date, the 9th of July 2013… Today is the 24th of July! Where are today’s logs? Strangely, the alert log for the standby database shows that everything is OK and that recovery of logs is proceeding as expected. It turns out that the problem is down to the incarnation of the database and a misplaced ORDER BY clause.

For more on database incarnations you really should check out the official documentation, particularly the neat diagram that really explains what’s happening when you issue a RESETLOGS command, you can find all that here: http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmrvcon.htm#BRADV1171
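
If you want to see where a database stands, the incarnation history is exposed through V$DATABASE_INCARNATION; a quick sketch of the sort of check involved:

$ sqlplus -s / as sysdba <<'SQL'
-- each OPEN RESETLOGS adds a row; STATUS shows which incarnation is CURRENT
SELECT INCARNATION#, RESETLOGS_CHANGE#, RESETLOGS_TIME, STATUS
FROM V$DATABASE_INCARNATION
ORDER BY INCARNATION#;
SQL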

In my case, the incarnation had moved on (as a result of something we were trying out last week) and so log sequence numbers are now being reused. The query above orders its results by SEQUENCE#, so the highest sequence numbers display last, but those high sequence numbers were generated by a previous incarnation of the database and so are not current. A simple change to the query reveals the truth of the situation:

SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, APPLIED
FROM V$ARCHIVED_LOG
WHERE THREAD# = 1;

With the ORDER BY removed, the results of the query look a lot more reassuring:

 SEQUENCE# FIRST_TIME         NEXT_TIME          APPLIED
---------- ------------------ ------------------ ---------
      3208 24-JUL-13 14:11:52 24-JUL-13 14:11:54 YES
      3209 24-JUL-13 14:11:54 24-JUL-13 14:11:59 YES
      3210 24-JUL-13 14:11:59 24-JUL-13 14:12:02 YES
      3211 24-JUL-13 14:12:02 24-JUL-13 14:13:57 YES
      3212 24-JUL-13 14:13:57 24-JUL-13 14:14:50 YES

Note the correct date and the low, low log sequence numbers!

Keeping track of the time, so to speak, when the database incarnation moves on is something that can catch any DBA out, but it’s one of those fun little advanced topics, along with Data Guard, that adds a little spice to the day.