A little bit of optimism can go a long way…

Sometimes I feel like I.T. and Optimism are polar opposites.  Do a quick Google search across any number of general forums (r/sysadmin, r/networking, r/itdept, stackoverflow, sqlservercentral, etc…) and you’re bound to find a good amount of frustration and cynicism.  I don’t mean the kind of technical frustration that can be productive: I mean the really gritty kind that builds up in an embittered mind over the course of time.  I don’t know about you, dear reader, but I didn’t get into I.T. to hate my life or my job.  I got into I.T. out of sheer love of technology and a desire to use that love and passion to help others and, in turn, add value to my employers.  Oh yeah….and I knew that working in I.T. had the potential to reward me personally for that love and passion too!

I recently went through “a situation”.  Why do I use quotes around that term, like I’m referring to a wrestler by the same name or being really sarcastic?  For neither of those reasons; rather, because I think we all deal with a similar situation at some point in our careers, so allow me to ‘splain it:

 

Mr. Bad Boss

When I began one of my recent jobs, I walked into a very common situation:  an IT Manager who wasn’t actually managing the department, but had their hands in the details of EVERYTHING.  Don’t get me wrong:  I get that in some cases this is both desired and necessary.  This was NOT one of those cases, however.  This manager essentially kept everything under lock and key and would delegate tasks as they saw fit.  That can work in smaller shops, but this was an enterprise that was growing exponentially.  This kind of management tactic was not sustainable, nor did it have a hope in hell of being successful.

Over the course of my first few months I witnessed the tornado the department had become.  Instead of befriending the operation this shop was there to support, the department treated the operation as the enemy, and both sides felt the same way about each other (you can imagine how productive meetings were….).  Nothing the department touched was planned out or approached constructively, and the talent we had on our team wasn’t being utilized to its full potential, since the IT Manager wanted to dole out tasks as opposed to letting the team share in owning the department.  Many times our talented folks would get launched (seriously….figuratively catapulted) into an emergency that needed to be sorted out in negative amounts of time.  Never mind the fact that Mr. Boss had kept the issue sitting in his inbox for several weeks…

Longer story kinda shorter:  This manager was released from the organization and our department was left without a department head.

So we were left with a disgruntled department of talented folks who had been battered, bruised, and beaten.  Complicating matters, upper management had a pretty bad view of the department as a whole, which was entirely due to this manager’s actions (or lack thereof).  So not only was the department beaten up, it was very discouraged too.

Lots of uncertainty seemed to loom like a dark cloud hovering overhead, but the workload continued to flow in like a stampede of raging rhinos.

 

No More Mr. Bad Boss:  The Aftermath

So there’s “the situation”.  In the days that followed this manager’s release, we started down the track of making sure we had all of the tribal knowledge (aka undocumented information) collected and documented.  If you’ve ever been at the senior level of a department when the boss got let go, then you probably have a pretty good idea of who the boss’ day-to-day tasks fall to….  While there was no official “IT Manager” anymore, it was up to me, another senior member of the department (we’re gonna call him Teammate from here on out), and a few upper-tier support folks to keep the day-to-day humming along.

While we knew we had to keep the day-to-day going, Teammate and I also knew the culture of the department had to change in order for our future to be a sustainable one.  That culture was toxic, both for the people in the department and for the business.  Here’s a quick list of some of the common issues we were aiming to change:

  • Support techs were terrified to attempt to troubleshoot anything for fear of making an issue worse.
  • Hand in hand with the first point:  problems were immediately escalated to top-tier engineers as opposed to going through any sort of troubleshooting and escalation path.
  • Lots of EGO.  Everyone thought they had to know everything, and since the prior management had berated them when they didn’t know something, the EGO was that much stronger.
  • Operation vs. IT mentality (and vice versa) had to be rectified somehow.
  • Unbalanced pay rates across like-positions (and everyone knew about it….WAT).
  • The overall technical skillset of the department was lacking due to the “hands-off” ideology that previous management had employed.
  • A lot of really scared, burned out, overworked and discouraged employees who felt like things would always be this way.

In addition to making every effort to change the department’s culture on the points above, we also discovered that many back-end systems weren’t operating correctly (e.g. backups, network topologies, managed services, etc…).  Need a backup from a week ago?  Might be able to do that.  Need to troubleshoot that VPN tunnel?  Might be just like the other tunnels…..or not.  We had our work cut out for us, and over the course of the next few months we made every effort to steer the department in the right direction.

 

Out with the bad, in with the GOOD

I have to be honest here:  if it had been me in the place of some of our support folks, I can’t say that I could have found it in me to muster up a “can-do” attitude.  Not after the barrage of junk they had to put up with.  And to be fair to them, the business hadn’t done anything to help them, even though it saw there was a problem.  I wouldn’t have had a whole lot of faith that this was going to change anytime soon, regardless of any change in leadership.

But my Teammate and I had a perspective that the rest of our department did not, for a variety of reasons.

Reason 1:  We’d been doing this for a while.  Most of our team was pretty green and hadn’t been through anything like this before.  While I hadn’t experienced this within an IT support department specifically, this wasn’t my first time dealing with a re-org or the loss of a superior.

Reason 2:  We saw the light at the end of the tunnel from a technical standpoint.  All of these problems were fixable; it was just going to take time to fix each one.  Being a 24×7 operation provides challenges when you want to do things like….rebuild the entire network.
(╯°□°)╯︵ ┻━┻

Reason 3:  We had a lot of great backing from upper management.  They realized that a change was needed, and they were willing to help with that change, both by providing moral support and by funding the department to make the necessary technical changes.  This was probably the biggest reason for our optimism.  After all, if the business isn’t willing to fund your fixes and back your efforts, you’re in a no-win situation.

 

Moving Forward to Better Times

We started having daily conversations with our team in an effort to inject optimism into their day-to-day.  At the same time, we gave them a much-needed outlet to vent their pent-up frustrations – but made every effort to ensure their venting stayed productive and didn’t sink into bitterness.  We also went above and beyond to make sure they could reach out to us whenever they needed technical assistance.  However, unlike before with Mr. Bad Boss, when they escalated a problem to us we would take the opportunity to teach them about the problem and how to resolve it.  We didn’t care that the business was jumping up and down to get ABC back online – they needed to learn and we needed to teach (within reason, of course – we’re not about the business losing money or anything).

After a few months of this, the entire department was much more optimistic about the future.  They also had a sense of ownership in our environment, since they were now engaged in helping to fix it when problems came up.  And then we signed on a new IT Director whose hiring my Teammate and I had direct input into (yep – we got to hire our boss).  He brought a lot of experience and technical expertise to the table and turned out to be a perfect fit – hooray!

Two months into the new IT Director’s tenure, the department didn’t look anything like it had before:  folks were pretty happy coming into work for a change.  Folks who needed to review their compensation got that opportunity.  A few folks were promoted to supervisor-level roles.  We hired some additional upper-tier folks who brought much-needed expertise to the department.  With the help of the IT Director we created a kickass team of folks who all wanted our department to be a success.  Being a part of a great team goes a long way in the face of a technical mess that will take a considerable amount of time to completely clean up.

 

Final Thoughts

Being optimistic in the face of depressing uncertainty is hard – it just is.  It takes effort and isn’t driven by emotion – you have to act your way into feeling optimistic sometimes.  But a little optimism and effort can absolutely change everything.

These aren’t the DR-plans you’re looking for…

I have to apologize for the lame Star-Wars-themed title, but I’m going to see The Force Awakens tonight and I’m just so dang excited!  Moving right along…

At a recent job, I was tasked with putting together a better disaster recovery plan for my employer.  While a lot of folks loathe DR planning, it’s actually one of my favorite kinds of projects to plan and implement.  For a little background about the business:

  • They’re very new to the overall concept of an I.T. department – less than 4 years ago they were almost exclusively using paper.  Yes. Paper.  And some pencils I would imagine. Pens were probably reserved for the most elite paper-writers of that time (gel pen = precious!).
  • In the two years before my time there, they’d shored up their I.T. staff by a factor of…well, a lot.  Two years prior they had around 4-6 I.T. employees (all general support).  By the time I got there, they had almost 15, with several specialties mixed in.

The company had two major systems in place, both handling primarily OLTP-type workloads, and we were anticipating exponential growth over the next 5 years.  As with any new environment, I spent my first 30(ish) days inventorying and reviewing everything I had in the environment:

  • What physical servers do I have and where are they located geographically?
  • How many database instances do I have and how many databases are in each?
  • How many databases do I have, how big are they, how fast are they growing, and what are their business requirements for DR and HA?  (A rough query for this kind of inventory is sketched just after this list.)
  • Probably the single most important question to ask when you’re new to an environment:  What do the backups and restores look like and/or what is the current DR strategy?  (hint:  there really wasn’t a DR strategy in place…)
  • …and the list goes on.
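
To make that inventory concrete, here’s a minimal sketch (not from the original checklist) of the kind of per-instance query I mean, built only on the standard catalog views and msdb backup history; the column choices and filters are my own, so adjust them to your environment.

```sql
-- Rough per-database inventory for one instance: size, recovery model,
-- and the most recent full/log backups recorded in msdb.
-- Note: msdb history only reaches back as far as your cleanup jobs allow.
SELECT
    d.name                AS database_name,
    d.recovery_model_desc AS recovery_model,
    sz.size_mb,
    bk.last_full_backup,
    bk.last_log_backup
FROM sys.databases AS d
CROSS APPLY (
    SELECT CAST(SUM(mf.size) * 8 / 1024.0 AS DECIMAL(18, 2)) AS size_mb
    FROM sys.master_files AS mf
    WHERE mf.database_id = d.database_id
) AS sz
OUTER APPLY (
    SELECT
        MAX(CASE WHEN b.type = 'D' THEN b.backup_finish_date END) AS last_full_backup,
        MAX(CASE WHEN b.type = 'L' THEN b.backup_finish_date END) AS last_log_backup
    FROM msdb.dbo.backupset AS b
    WHERE b.database_name = d.name
) AS bk
ORDER BY d.name;
```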

Sidebar:  I’d highly recommend checking out Brent Ozar’s SQL Server Inventory Worksheet if you haven’t already.  I have my own flavor of this worksheet now, but his checklist template is what my document was derived from.  The template is included in Brent Ozar’s free First Aid Kit – you can find it here.

After my inventory was complete, I knew I had several instances, all running different versions of SQL Server (except for a couple of Express 2014 instances), all using different HA technologies (or none at all), and I knew how much data was currently on hand.  More importantly, I now knew how much data I was on the hook to lose if one of my boxes got kicked in the junk.
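
For what it’s worth, one rough way to put a number on that exposure (again, not from the original post – just a sketch against msdb history) is to check how stale the newest backup of any kind is for each database.

```sql
-- Rough data-loss exposure: minutes since the most recent backup of any type.
-- NULL means msdb has no backup history for that database at all.
SELECT
    d.name                AS database_name,
    d.recovery_model_desc AS recovery_model,
    DATEDIFF(MINUTE, MAX(b.backup_finish_date), SYSDATETIME()) AS minutes_since_last_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
    ON b.database_name = d.name
WHERE d.name <> 'tempdb'   -- tempdb can't be backed up anyway
GROUP BY d.name, d.recovery_model_desc
ORDER BY minutes_since_last_backup DESC;
```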

By this point, I had the entire database environment documented in a format I could present to the business.  Unfortunately, even though the facts don’t lie, businesses frequently have a hard time believing that the environment they’ve spent a small fortune on isn’t as capable, or as protected from data loss, as they thought.  If you’re presenting this information to your customer or employer, be gentle.  Pro Tip:  reality hurts sometimes, doesn’t it?  If you want to get through to the business’ logical sense and past their emotional breakdown, you’re going to need to make sure you’re not presenting this information as if you’re the one holding the ever-hurting data-loss hammer.  After all, you most likely didn’t cause this problem and you’re there to help them out, so provide some soothing reassurance.  Maybe they need a soft kitty song – I dunno.  Think of it like this:  remember the last time you lost some piece of critical personal data and how awful that was?  Take that feeling, then add your job security to it, plus a hefty price tag and a setback to your career.  When businesses lose data, they lose money and sometimes credibility, and the effects of that take a while to get past.

For discretion’s sake, I’m not going to get into the actual specifics about the condition of the company’s environment.  But let’s just say that if the data-loss gap was supposed to be on par, they were inside bowling and not paying attention to the golf game happening outside…

I presented my findings to the business as gently and matter-of-factly as I was able to, offered them a tissue, and then asked if we could sit down together and establish RTO and RPO.  If you don’t know what RPO and RTO are, stop reading this now and go read about these acronyms here.

The worksheet I linked to above is a great conversation starter.  Brent does a nice job of staying objective and keeping opinions and emotion out of the conversation as much as possible.

My employer elected to implement something as soon as possible (containment), and then revisit their DR/HA strategy with a longer-term view once the immediate solution was deployed.  At the end of the day, we implemented:

  • Ola Hallengren’s Maintenance Solution (read more about it here – and it’s free!)
    • It’s worth noting that you’d be hard pressed to find a better free solution out there today.  If you’ve never seen it, you owe it to yourself (and to your sanity) to load it in a test environment and take it for a spin.
  • Moved every mission-critical database that was in SIMPLE recovery to FULL, and started taking frequent log backups immediately (a rough T-SQL sketch of this follows the list).
  • Moved the destination of all backups to a separate NAS that was itself backed up to tape and to the cloud on a frequent schedule.
    • We eventually implemented an automated restore to a dev/test server for backup restoration verification as well (also sketched below).
  • Implemented a short-term, immediate-containment DR solution using a remote site and Log Shipping.
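
For the recovery-model and log-backup piece, here’s a minimal T-SQL sketch of the pattern rather than the actual scripts we ran: the database name and NAS path are hypothetical placeholders, and the scheduled day-to-day backups were handled by the maintenance solution instead of hand-written statements like these.

```sql
-- All names and paths below are hypothetical placeholders.
-- 1. Switch a mission-critical database from SIMPLE to FULL recovery.
ALTER DATABASE [CriticalDB] SET RECOVERY FULL;

-- 2. Take a full backup right away; point-in-time recovery only becomes
--    possible once a full (or differential) backup follows the switch.
BACKUP DATABASE [CriticalDB]
TO DISK = N'\\NAS01\SQLBackups\CriticalDB\CriticalDB_Full.bak'
WITH CHECKSUM, COMPRESSION, STATS = 10;

-- 3. Start frequent log backups (scheduled via Agent jobs; frequency
--    driven by the RPO agreed on with the business).
BACKUP LOG [CriticalDB]
TO DISK = N'\\NAS01\SQLBackups\CriticalDB\CriticalDB_Log.trn'
WITH CHECKSUM, COMPRESSION, STATS = 10;
```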
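
And for the restore-verification piece mentioned in the list, a sketch of the idea (again with hypothetical names, paths, and logical file names) run against a dev/test instance: restore the newest full backup under a throwaway name, check it, then drop it.

```sql
-- Run on a dev/test instance.  All names and paths are hypothetical.
-- 1. Restore the latest full backup under a throwaway name.
RESTORE DATABASE [CriticalDB_RestoreTest]
FROM DISK = N'\\NAS01\SQLBackups\CriticalDB\CriticalDB_Full.bak'
WITH MOVE N'CriticalDB'     TO N'D:\SQLData\CriticalDB_RestoreTest.mdf',
     MOVE N'CriticalDB_log' TO N'L:\SQLLogs\CriticalDB_RestoreTest.ldf',
     REPLACE, STATS = 10;

-- 2. A backup you can't restore and read cleanly isn't really a backup:
--    run a full consistency check against the restored copy.
DBCC CHECKDB (N'CriticalDB_RestoreTest') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- 3. Drop the restored copy once the check comes back clean.
DROP DATABASE [CriticalDB_RestoreTest];
```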

At the end of the day, we all evaluate risk differently.  And as a DBA, you have a uniquely awesome skillset for evaluating and resolving data risks for your clients or employers.  If you can learn how to effectively communicate your skills to general business folks, you’re gonna have a lot of success stories to share!

I’d also like to take this opportunity to say thanks for stopping by my blog.  This venture is completely new and foreign to me and I’m looking forward to starting the conversation with all you internet folk.  See ya next time!